Dataset schema (each record below is one table row, flattened as `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |` followed by the card text):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-22 18:29:56 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 570 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-22 18:28:16 |
| card | string | length 11 – 1.01M |
eli335015/blockassist | eli335015 | 2025-09-22T17:17:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "majestic colorful antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-21T03:59:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic colorful antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cornishteddy57/blockassist | cornishteddy57 | 2025-09-22T16:55:36Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy freckled camel", "arxiv:2504.07091", "region:us"] | null | 2025-09-21T03:44:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy freckled camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.001_12800_3 | winnieyangwannan | 2025-09-22T16:35:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-to-text | 2025-09-22T16:33:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
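The card leaves this section empty; the following is a minimal, untested sketch assuming the generic 🤗 transformers image-to-text pipeline applies to this checkpoint (its pipeline tag is `image-to-text`; the image path is a placeholder):
```python
from transformers import pipeline

# Assumption: the generic image-to-text pipeline works for this Qwen2.5-VL checkpoint.
captioner = pipeline(
    "image-to-text",
    model="winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.001_12800_3",
)
print(captioner("path/to/image.png"))  # returns a list of {"generated_text": ...} dicts
```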
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
npsulima87/blockassist | npsulima87 | 2025-09-22T16:14:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "opaque wise chimpanzee", "arxiv:2504.07091", "region:us"] | null | 2025-09-18T17:17:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- opaque wise chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TM1550/my_awesome_qa_model | TM1550 | 2025-09-22T15:33:55Z | 32 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2025-09-19T15:59:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4873 |
| 2.8209 | 2.0 | 500 | 1.6982 |
| 2.8209 | 3.0 | 750 | 1.6069 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.6.0+cpu
- Datasets 4.0.0
- Tokenizers 0.22.0
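## Example usage
The card does not include an inference snippet; below is a minimal sketch using the standard 🤗 transformers question-answering pipeline (the question and context are illustrative placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="TM1550/my_awesome_qa_model")
result = qa(
    question="What was DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster Transformer model distilled from BERT.",
)
print(result["answer"], result["score"])  # extracted answer span and its confidence
```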
|
nnilayy/dreamer_window_512-binary-arousal-Kfold-4-stride_512 | nnilayy | 2025-09-22T15:05:51Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-09-22T15:05:48Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
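Loading a `PyTorchModelHubMixin` checkpoint requires the original model class, which this card does not document; the class below is therefore a hypothetical placeholder that only illustrates the general loading pattern:
```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical architecture: the real class and constructor arguments for this
# checkpoint are not documented in the card.
class MyModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.head(x)

# The mixin restores the saved config and weights from the Hub.
model = MyModel.from_pretrained("nnilayy/dreamer_window_512-binary-arousal-Kfold-4-stride_512")
```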
|
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3 | MattBou00 | 2025-09-22T15:02:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T15:00:56Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model's outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-40 | MattBou00 | 2025-09-22T14:48:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T14:46:33Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model's outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-40")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
aamijar/Llama-2-7b-hf-dora-r8-rte-epochs0 | aamijar | 2025-09-22T14:46:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-22T14:46:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
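The card provides no snippet or pipeline tag; the repo name suggests a DoRA fine-tune of Llama-2-7b on RTE, so this untested sketch assumes a standard causal-LM checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds a full causal-LM checkpoint rather than a bare adapter.
repo = "aamijar/Llama-2-7b-hf-dora-r8-rte-epochs0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```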
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
boringblobking/lora_model10000 | boringblobking | 2025-09-22T14:15:10Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-17T09:50:17Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** boringblobking
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
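The card does not show how to load the weights; here is a minimal, untested sketch assuming they load through Unsloth's `FastLanguageModel` (the sequence length and 4-bit flag are illustrative):
```python
from unsloth import FastLanguageModel

# Assumption: the repo's weights load directly via FastLanguageModel.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="boringblobking/lora_model10000",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```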
|
ramazanbaris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_scurrying_cat | ramazanbaris | 2025-09-22T14:00:47Z | 5 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am thick_scurrying_cat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T10:16:59Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am thick_scurrying_cat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
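The card leaves this section empty; below is a minimal, untested sketch using the standard 🤗 transformers text-generation pipeline with a chat-style prompt (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ramazanbaris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_scurrying_cat",
)
messages = [{"role": "user", "content": "Say hello in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```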
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mehuldamani/RLCR-math-sept21_startingFromScratch | mehuldamani | 2025-09-22T13:22:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-21T04:51:01Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: RLCR-math-sept21_startingFromScratch
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for RLCR-math-sept21_startingFromScratch
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mehuldamani/RLCR-math-sept21_startingFromScratch", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mehuldamani/internalized_inf_scaling/runs/f39e4ned)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.48.3
- Pytorch: 2.5.1
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Jaw00/donut-v6 | Jaw00 | 2025-09-22T12:17:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | image-to-text | 2025-09-22T10:16:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
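The card leaves this section empty; the sketch below assumes the generic 🤗 transformers image-to-text pipeline works for this vision-encoder-decoder checkpoint (Donut-style models often also need a task-specific prompt via their processor, which is not documented here):
```python
from transformers import pipeline

# Assumption: the generic image-to-text pipeline applies; the image path is a placeholder.
reader = pipeline("image-to-text", model="Jaw00/donut-v6")
print(reader("path/to/document.png"))
```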
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Team-Atom/act_pick_and_place | Team-Atom | 2025-09-22T12:09:17Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "act", "robotics", "dataset:Team-Atom/pick_and_place", "arxiv:2304.13705", "license:apache-2.0", "region:us"] | robotics | 2025-09-22T12:09:04Z |
---
datasets: Team-Atom/pick_and_place
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Coercer/BatchTagger | Coercer | 2025-09-22T12:01:41Z | 2 | 0 | null | ["region:us"] | null | 2025-02-10T16:01:55Z |
If you got here, you are probably looking for the Colab implementation that uses this repo:
https://colab.research.google.com/drive/1DKT5rFBTHhkyibVMK4SCYTJWHl2kaV3p?usp=sharing
The original implementation, to which all credit goes, is:
https://huggingface.co/RedRocket/JointTaggerProject
|
liluckyl/Qwen3-0.6B-Gensyn-Swarm-dense_fast_bobcat | liluckyl | 2025-09-22T11:53:18Z | 45 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am dense_fast_bobcat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T11:07:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am dense_fast_bobcat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
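The card leaves this section empty; below is a minimal, untested sketch using the standard 🤗 transformers text-generation pipeline with a chat-style prompt (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="liluckyl/Qwen3-0.6B-Gensyn-Swarm-dense_fast_bobcat",
)
messages = [{"role": "user", "content": "Say hello in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```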
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/sage-reasoning-3b-i1-GGUF | mradermacher | 2025-09-22T11:40:27Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "ko", "fr", "zh", "es", "base_model:sagea-ai/sage-reasoning-3b", "base_model:quantized:sagea-ai/sage-reasoning-3b", "license:llama3.2", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-09-22T09:43:04Z |
---
base_model: sagea-ai/sage-reasoning-3b
language:
- en
- ko
- fr
- zh
- es
library_name: transformers
license: llama3.2
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/sagea-ai/sage-reasoning-3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#sage-reasoning-3b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/sage-reasoning-3b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
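As a quick start, here is a minimal, untested sketch using llama-cpp-python (one GGUF runtime among several; llama.cpp's CLI works equally well). The file name is the "fast, recommended" Q4_K_M quant from the table below:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo.
path = hf_hub_download(
    repo_id="mradermacher/sage-reasoning-3b-i1-GGUF",
    filename="sage-reasoning-3b.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```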
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q4_1.gguf) | i1-Q4_1 | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/sage-reasoning-3b-i1-GGUF/resolve/main/sage-reasoning-3b.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
qualiaadmin/eef0eb84-9e56-4afc-9aae-05204b8cf5e2 | qualiaadmin | 2025-09-22T11:33:29Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us"] | robotics | 2025-09-22T11:31:28Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/Tanzania-0.5B-GGUF | mradermacher | 2025-09-22T11:24:45Z | 0 | 0 | transformers | ["transformers", "gguf", "creative", "roleplay", "story-telling", "story-writing", "en", "dataset:practical-dreamer/RPGPT_PublicDomain-ShareGPT", "dataset:Gryphe/Opus-WritingPrompts", "base_model:XeTute/Tanzania-0.5B", "base_model:quantized:XeTute/Tanzania-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-22T09:36:57Z |
---
base_model: XeTute/Tanzania-0.5B
datasets:
- practical-dreamer/RPGPT_PublicDomain-ShareGPT
- Gryphe/Opus-WritingPrompts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- creative
- roleplay
- story-telling
- story-writing
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/XeTute/Tanzania-0.5B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Tanzania-0.5B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
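As with the other quant repos, here is a minimal, untested llama-cpp-python sketch (an assumed runtime choice; any GGUF-capable runtime works), using the "fast, recommended" Q4_K_M quant from the table below:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo.
path = hf_hub_download(
    repo_id="mradermacher/Tanzania-0.5B-GGUF",
    filename="Tanzania-0.5B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```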
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF/resolve/main/Tanzania-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ArjunRavi/mental-health-finetune | ArjunRavi | 2025-09-22T11:21:09Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:finetune:NousResearch/Llama-2-7b-chat-hf", "endpoints_compatible", "region:us"] | null | 2025-09-22T11:14:07Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: transformers
model_name: mental-health-finetune
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for mental-health-finetune
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ArjunRavi/mental-health-finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758539884 | poolkiltzn | 2025-09-22T11:19:47Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us"] | null | 2025-09-22T11:19:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualiaadmin/4dff72b5-d88f-4b9a-b0d6-93b3dc86fe05 | qualiaadmin | 2025-09-22T11:13:23Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "smolvla", "robotics", "dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us"] | robotics | 2025-09-22T11:11:38Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
felixZzz/fgktxwe7-step_00500
|
felixZzz
| 2025-09-22T11:06:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:05:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-80
|
MattBou00
| 2025-09-22T11:00:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:59:04Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Adleeene/GPT-OSS-20B_FT
|
Adleeene
| 2025-09-22T10:33:23Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"gpt_oss",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:06:27Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Adleeene
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
workemailty/blockassist
|
workemailty
| 2025-09-22T10:31:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled soft hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T11:08:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled soft hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
caphe/paa10
|
caphe
| 2025-09-22T10:26:56Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T09:50:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Reihaneh/wav2vec2_fy_nl_best_frisian_1
|
Reihaneh
| 2025-09-22T10:17:11Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:17:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-100
|
MattBou00
| 2025-09-22T10:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:11:07Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922083940-epoch-1
|
vectorzhou
| 2025-09-22T09:51:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:51:28Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922083940-epoch-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/pwuxhkms)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ArrayCats/LoRA-1.5
|
ArrayCats
| 2025-09-22T09:46:54Z | 0 | 8 | null |
[
"license:unknown",
"region:us"
] | null | 2023-07-23T00:06:13Z |
---
license: unknown
---
There are some SDXL/PONY models in here, but I plan to host them separately in the future, so the SDXL/PONY models under this Models repo will no longer be updated (they will not be actively deleted either).
|
murugeshmarvel/QAD_v_2.3
|
murugeshmarvel
| 2025-09-22T09:43:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"quantization",
"fp8",
"fine-tuned",
"qat",
"Clinical",
"EDC Data",
"conversational",
"en",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-09-22T09:32:09Z |
---
language:
- en
license: llama3.3
library_name: transformers
tags:
- text-generation
- llama
- quantization
- fp8
- fine-tuned
- qat
- Clinical
- EDC Data
base_model: meta-llama/Llama-3.3-70B-Instruct
model_type: llama
quantization:
method: FP8
weights: 8-bit float
activations: 8-bit float
backend: llmcompressor
inference:
framework: vllm
quantization: fp8
pipeline_tag: text-generation
---
# QAD_v_2.3
## Model Description
This is an **FP8-quantized** version of Llama 3.3 70B Instruct that has been fine-tuned on an EDC dataset using **Quantization-Aware Training (QAT)** and then post-training quantized to FP8 format for production deployment.
## Key Features
- 🚀 **50% Memory Reduction**: ~66GB vs ~131GB for BF16 model
- ⚡ **Optimized for Production**: Designed for high-throughput inference
- 🎯 **QAT Fine-tuned**: Maintains accuracy while being quantization-aware
- 🔧 **FP8 Quantized**: Uses 8-bit floating point for weights and activations
- 🏭 **vLLM Compatible**: Optimized for vLLM inference engine
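As a minimal inference sketch (assuming a vLLM build with FP8 support; the prompt and sampling settings are illustrative, not part of this card):
```python
from vllm import LLM, SamplingParams

# Load the FP8-quantized checkpoint; vLLM picks up the quantization
# config shipped with the model weights.
llm = LLM(model="murugeshmarvel/QAD_v_2.3", quantization="fp8")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Summarize the role of an EDC system in a clinical trial."], params
)
print(outputs[0].outputs[0].text)
```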
|
sameeahameed/DILC-llama-3.2-3b-persona-all-disaster-IDRISI
|
sameeahameed
| 2025-09-22T09:43:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:43:13Z |
---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sameeahameed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chillies/uit-dsc-2025
|
chillies
| 2025-09-22T09:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:34:07Z |
---
base_model: unsloth/qwen3-4b-base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chillies
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mastefan/2025-24679-text-distilbert-predictor
|
mastefan
| 2025-09-22T09:36:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"food",
"dishes",
"class_exercise",
"en",
"dataset:aedupuga/food-description-text",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T06:51:18Z |
---
Author: Michael Stefanov
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
- food
- dishes
- class_exercise
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: 2025-24679-text-distilbert-predictor
results: []
datasets:
- aedupuga/food-description-text
language:
- en
pipeline_tag: text-classification
---
# 2025-24679-text-distilbert-predictor
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [aedupuga/food-description-text](https://huggingface.co/datasets/aedupuga/food-description-text) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0451
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
Purpose:
This model is to be used for in-class assignments and activities associated with Course 24679 at CMU.
Preprocessing/Augmentation:
The preprocessing of this data includes splitting the dataset into train and
test sets, and using AutoML to predict whether a book from the dataset will be
recommended. The predictor was fit using a 20-minute time limit, together with
the best_quality preset and auto_stack to improve accuracy within the
constrained timeframe (a sketch of this fit appears below).
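A minimal sketch of that fit, assuming AutoGluon's `TabularPredictor` (the file and column names here are illustrative, not from the original training run):
```python
import pandas as pd
from autogluon.tabular import TabularPredictor

# Illustrative train/test split; the dataset file and label column are assumptions.
df = pd.read_csv("books.csv")
train = df.sample(frac=0.8, random_state=42)
test = df.drop(train.index)

# 20-minute time limit with the best_quality preset and auto_stack,
# matching the preprocessing notes above.
predictor = TabularPredictor(label="recommended").fit(
    train,
    time_limit=20 * 60,
    presets="best_quality",
    auto_stack=True,
)
print(predictor.evaluate(test))
```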
## Intended uses & limitations
Intended use/limitations:
The intended use of this model is exclusively for classroom and assignment
use. Please request permission if you wish to use it elsewhere.
Ethical notes:
The AI used in this sample is a fairly benign use case handling basic
text manipulation. However, please weigh the environmental impact of
large-scale usage against the benefits such technology brings.
AI Usage disclosure:
Original code to assist with data augmentation was developed with the help of
Google Gemini, in combination with course material from 24679 at CMU.
## Training and evaluation data
### Limitations:
The training dataset is a simple text dataset for books, and the model has
been trained to classify the recommended column, which is a simple binary
label. The resulting model is therefore quite minimal.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.0527 | 1.0 | 80 | 0.0426 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0088 | 2.0 | 160 | 0.0070 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.006 | 3.0 | 240 | 0.0041 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0042 | 4.0 | 320 | 0.0032 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0042 | 5.0 | 400 | 0.0029 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
vikki1998/new_Vidhilekhai_v1.1.0-merged
|
vikki1998
| 2025-09-22T09:35:47Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-16T10:06:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bema001/business-news-generator
|
Bema001
| 2025-09-22T09:35:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:34:53Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- generated_from_trainer
model-index:
- name: business-news-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# business-news-generator
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1856 | 0.32 | 200 | 3.4159 |
| 2.9297 | 0.64 | 400 | 3.3019 |
| 2.7244 | 0.96 | 600 | 3.1694 |
| 1.6931 | 1.28 | 800 | 3.2505 |
| 1.4945 | 1.6 | 1000 | 3.2185 |
| 1.4384 | 1.92 | 1200 | 3.2075 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
obranzell/results
|
obranzell
| 2025-09-22T09:34:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T09:34:00Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.2571 | 0.894 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
asulova/hamlet-dpo
|
asulova
| 2025-09-22T09:28:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:asulova/hamlet-merged",
"dpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:2305.18290",
"base_model:asulova/hamlet-merged",
"region:us"
] |
text-generation
| 2025-09-22T09:28:08Z |
---
base_model: asulova/hamlet-merged
library_name: peft
model_name: dpo_hamlet
tags:
- base_model:adapter:asulova/hamlet-merged
- dpo
- lora
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
---
# Model Card for dpo_hamlet
This model is a fine-tuned version of [asulova/hamlet-merged](https://huggingface.co/asulova/hamlet-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="asulova/hamlet-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- PEFT 0.17.1
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1758533128
|
kapalbalap
| 2025-09-22T09:26:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T09:26:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_opaque_buffalo
|
fatepurriyaz
| 2025-09-22T09:22:07Z | 46 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am foxy_opaque_buffalo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:34:34Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am foxy_opaque_buffalo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/vinallama-7b-iSMART-GGUF
|
mradermacher
| 2025-09-22T09:20:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"vi",
"base_model:lefantom00/vinallama-7b-iSMART",
"base_model:quantized:lefantom00/vinallama-7b-iSMART",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T08:16:57Z |
---
base_model: lefantom00/vinallama-7b-iSMART
language:
- vi
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lefantom00/vinallama-7b-iSMART
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#vinallama-7b-iSMART-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
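For example, here is a minimal sketch using the `llama-cpp-python` bindings (the file name and prompt are illustrative; any of the quants below loads the same way):
```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the recommended Q4_K_M file.
llm = Llama(model_path="vinallama-7b-iSMART.Q4_K_M.gguf", n_ctx=4096)
out = llm("Xin chào, bạn có khỏe không?", max_tokens=64)
print(out["choices"][0]["text"])
```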
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-iSMART-GGUF/resolve/main/vinallama-7b-iSMART.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CATIE-AQ/mistral7B-FR-InstructNLP-LoRA
|
CATIE-AQ
| 2025-09-22T09:09:26Z | 5 | 3 |
peft
|
[
"peft",
"text-generation",
"fr",
"dataset:CATIE-AQ/DFP",
"arxiv:2106.09685",
"arxiv:1910.09700",
"arxiv:2310.06825",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"co2_eq_emissions",
"region:us"
] |
text-generation
| 2023-10-06T07:33:15Z |
---
language: fr
license: apache-2.0
datasets:
- CATIE-AQ/DFP
library_name: peft
co2_eq_emissions: 110
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
# Adapter for Mistral-7B-v0.1 fine-tuned on DFP
## Adapter Description
This adapter was created by using the [PEFT](https://github.com/huggingface/peft) library
and allows the base model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) to be fine-tuned on 1,280,000 random rows of the [Dataset of French Prompts (DFP)](https://huggingface.co/datasets/CATIE-AQ/DFP) using the [LoRA](https://arxiv.org/abs/2106.09685) method.
We have trained 21,260,288 parameters out of 7,262,992,384, i.e. 0.29%.
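This fraction can be verified with PEFT's built-in helper (a minimal sketch; `is_trainable=True` is needed so the adapter is loaded in training mode rather than frozen for inference):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(
    model, "CATIE-AQ/mistral7B-FR-InstructNLP-LoRA", is_trainable=True
)
# Prints trainable params, all params, and the trainable percentage.
model.print_trainable_parameters()
```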
## Usage
### Code
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
config = PeftConfig.from_pretrained("CATIE-AQ/mistral7B-FR-InstructNLP-LoRA")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(model, "CATIE-AQ/mistral7B-FR-InstructNLP-LoRA")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token
prompt = '''Prenez l'énoncé suivant comme vrai : "Euh, non, pour être honnête, je n'ai jamais lu aucun des livres que j'étais supposé lire."\n Alors l'énoncé suivant : "Je n'ai pas lu beaucoup de livres." est "vrai", "faux", ou "incertain" ?'''
model_input = tokenizer(prompt, return_tensors="pt").to("cuda")
model.eval()
with torch.no_grad():
print(tokenizer.decode(model.generate(**model_input, max_new_tokens=100, pad_token_id=2)[0], skip_special_tokens=True))
```
### Examples
Some examples from the test split of [Dataset of French Prompts (DFP)](https://huggingface.co/datasets/CATIE-AQ/DFP):
**Input**:
```
Prenez l'énoncé suivant comme vrai : "Euh, non, pour être honnête, je n'ai jamais lu aucun des livres que j'étais supposé lire."\n Alors l'énoncé suivant : "Je n'ai pas lu beaucoup de livres." est "vrai", "faux", ou "incertain" ?
```
**Output**:
```
vrai
```
**Input**:
```
Commentaire du produit : "Voilà un film excellement tourné, scénarisé et, surtout, joué –il nous importe tellement qu'un film soit bien joué, indépendamment du personnage, du côté de la morale où il est, voire de l'histoire ! Excellement joué y compris par les acteurs secondaires, comme Dean Norris (Under The Dome, Breaking Bad) ou Vincent d\'Onofrio (New York Section Criminelle). C'est d'ailleurs amusant de remarquer, parmi ces acteurs secondaires, le patriarche de la famille de policiers new-yorkais de la série télé « Blue Bloods » (Len Cariou) : c'est lui qui donne justement le la du film, un la qui paraît subversif mais qui n'est que pro-arme, bien américain (Cf. le fameux deuxième amendement de la Constitution des États-Unis) ; c'est lui qui est l'étincelle quand il démontre sur le terrain qu'il « vaut se protéger soi-même ». Protéger les gentils, et les siens, c'est en effet le sujet du film. Et c'est aussi notre problème à nous (comment ferions-nous, nous ?). Dès la lecture du synopsis, Death Wish rappelle "Un justicier dans la ville" avec Charles Bronson ; d'ailleurs au Québec, ils ont repris ce titre, et ce n'est pas idiot vu que le titre en anglais ne vaut pas mieux que sa traduction en français (pulsion de mort). Mais peu importe qu'il s'agisse d'un remake –d'ailleurs qui se souvient du film avec Charles Bronson –à revoir peut-être? Il s'agit avant tout de la rage d'être entouré d'abrutis et de criminels, rage bien mise en scène, peu à peu, dès le début, comme si tout y participait (les ombres de Chicago, les phares dans la nuit, la lourdeur des nuages bas). Mais pas de rage chez le héros principal (Bruce Willis), qui ne se voit pas entouré d'abrutis et de criminels (un peu à cause de son métier). Il s'agit ensuite de la force naturelle de l'intelligence sur l'abruti, et l'on est satisfait que ce dernier se fasse avoir en toute beauté. Il s'agit enfin du risque de glissade (vers la vengeance aveugle), traduite par quelques images à ne pas mettre sous tous les yeux." Ce commentaire dépeint le produit sous un angle négatif ou positif ?
```
**Output**:
```
pos
```
**Input**:
```
Parmi la liste d'intentions suivantes : "audio_volume_other, play_music, iot_hue_lighton, general_greet, calendar_set, audio_volume_down, social_query, audio_volume_mute, iot_wemo_on, iot_hue_lightup, audio_volume_up, iot_coffee, takeaway_query, qa_maths, play_game, cooking_query, iot_hue_lightdim, iot_wemo_off, music_settings, weather_query, news_query, alarm_remove, social_post, recommendation_events, transport_taxi, takeaway_order, music_query, calendar_query, lists_query, qa_currency, recommendation_movies, general_joke, recommendation_locations, email_querycontact, lists_remove, play_audiobook, email_addcontact, lists_createoradd, play_radio, qa_stock, alarm_query, email_sendemail, general_quirky, music_likeness, cooking_recipe, email_query, datetime_query, transport_traffic, play_podcasts, iot_hue_lightchange, calendar_remove, transport_query, transport_ticket, qa_factoid, iot_cleaning, alarm_set, datetime_convert, iot_hue_lightoff, qa_definition, music_dislikeness",\n indiquer celle présente dans le texte : quel jour de la semaine est le quinze août ?
```
**Output**:
```
datetime_query
```
**Input**:
```
Simplifier la phrase suivante en la divisant tout en conservant son sens complet : "Le Centre international de science et de technologie a la personnalite juridique et jouit de la capacite juridique la plus etendue reconnue aux personnes morales en vertu des lois applicables dans la Communaute et, en particulier, peut contracter, acquerir ou aliener des biens meubles et immeubles et etre partie a des poursuites judiciaires." Version simplifiée :
```
**Output**:
```
Le Centre international pour la science et la technologie est dote de la personnalite juridique. Il jouit de toute la capacite reconnue aux personnes morales par les lois applicables dans la Communaute et est ainsi plus particulierement habilite a contracter, a acquerir ou aliener des biens meubles ou immeubles et a ester en justice.
```
### In practice
This adapter was trained quickly (in just 11h) as a proof of concept, to test the then newly released Mistral model.
More complete work would involve training on more data (1.28M lines were used, whereas DFP contains over 113M) and for longer (see the image below, where the loss curve suggests it could still decrease).
It would also be possible to experiment with other adapters and hyperparameters.

## Training procedure
```
import os
from datasets import load_dataset
import torch
import accelerate
from transformers import AutoTokenizer, MistralForCausalLM, BitsAndBytesConfig, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

os.environ["WANDB_PROJECT"] = "mistral-7B-FR-Instruct-LORA"

# Load tokenizer and data
model_name = "mistralai/Mistral-7B-v0.1"
max_length = 1024
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    model_max_length=max_length,
    padding_side="left",  # important: left padding for causal LM training
    add_eos_token=True)
tokenizer.pad_token = tokenizer.eos_token

def preprocess_data(x):
    inputs = x["inputs"]
    targets = x["targets"]
    prompts = [inputs[i] + " " + targets[i] for i in range(len(inputs))]
    inputs = tokenizer(
        prompts,
        truncation=True,
        max_length=max_length,
        padding=False
    )
    return inputs

# Load and tokenize data
train_dataset = load_dataset("CATIE-AQ/DFP", split="train", num_proc=16)
valid_dataset = load_dataset("CATIE-AQ/DFP", split="validation", num_proc=16)

# Sample a random subset
train_dataset = train_dataset.shuffle().select(range(1280000))
valid_dataset = valid_dataset.shuffle().select(range(500))

tokenized_train_dataset = train_dataset.map(preprocess_data, remove_columns=train_dataset.column_names, batched=True, batch_size=20)
tokenized_val_dataset = valid_dataset.map(preprocess_data, remove_columns=valid_dataset.column_names, batched=True, batch_size=20)
tokenized_train_dataset = tokenized_train_dataset.with_format("torch")
tokenized_val_dataset = tokenized_val_dataset.with_format("torch")

# Load model
# Optional quantization for QLoRA
'''bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)'''
# Flash Attention 2 requires Ampere or newer GPUs (e.g. A100)!
model = MistralForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, use_flash_attention_2=True)

# Prepare LoRA
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj",
        "lm_head",
    ],
    bias="none",
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

training_args = TrainingArguments(
    output_dir="mistral7B-FR-Instruct",
    remove_unused_columns=True,
    warmup_steps=1000,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    max_steps=10000,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    logging_steps=50,
    fp16=True,
    optim="adamw_torch",  # paged_adamw_8bit for QLoRA
    logging_dir="./logs",
    save_strategy="steps",
    save_steps=1000,
    evaluation_strategy="steps",
    eval_steps=500,
    do_eval=True,
    report_to="wandb"
)

class DynamicDataCollator:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, features):
        batch = self.tokenizer.pad(
            features,
            padding="longest",
            max_length=max_length,
            pad_to_multiple_of=8
        )
        labels = batch["input_ids"].clone()
        labels[labels == self.tokenizer.pad_token_id] = -100  # ignore padding indices in the loss
        labels[:, -1] = self.tokenizer.eos_token_id  # but keep the final EOS as a target
        batch["labels"] = labels
        return batch

trainer = Trainer(
    model=model,
    train_dataset=tokenized_train_dataset,
    eval_dataset=tokenized_val_dataset,
    args=training_args,
    data_collator=DynamicDataCollator(tokenizer)
)
model.config.use_cache = False  # silence the warnings; re-enable for inference!
trainer.train()
```
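## Inference
A minimal inference sketch with PEFT. The adapter repository id below is hypothetical; replace it with this repository's actual id:
```
import torch
from transformers import AutoTokenizer, MistralForCausalLM
from peft import PeftModel

base = MistralForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "CATIE-AQ/mistral7B-FR-Instruct")  # hypothetical adapter id
model.config.use_cache = True  # re-enable the cache that was disabled during training
model.eval()

prompt = "Indiquer le nombre de mots dans la phrase suivante : Le chat dort sur le canapé."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```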
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** A100 PCIe 40/80GB
- **Hours used:** 11h
- **Cloud Provider:** Private Infrastructure
- **Carbon Efficiency:** 0.041 kg CO2eq/kWh (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) for October 6, 2023)
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.11 kg eq. CO2
## Citations
### PEFT library
```
@Misc{peft,
title = {PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods},
author = {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
howpublished = {\url{https://github.com/huggingface/peft}},
year = {2022}
}
```
### Mistral-7B
```
@misc{jiang2023mistral,
title={Mistral 7B},
author={Albert Q. Jiang and Alexandre Sablayrolles and Arthur Mensch and Chris Bamford and Devendra Singh Chaplot and Diego de las Casas and Florian Bressand and Gianna Lengyel and Guillaume Lample and Lucile Saulnier and Lélio Renard Lavaud and Marie-Anne Lachaux and Pierre Stock and Teven Le Scao and Thibaut Lavril and Thomas Wang and Timothée Lacroix and William El Sayed},
year={2023},
eprint={2310.06825},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### DFP
```
@misc{centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
```
### LoRA
```
@misc{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
year={2021},
eprint={2106.09685},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Guraha/TestCaseGenerator
|
Guraha
| 2025-09-22T08:57:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T08:57:28Z |
---
license: apache-2.0
---
|
vangard703/oxe_1_ep_RL_50step
|
vangard703
| 2025-09-22T08:26:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-22T08:15:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
software-mansion/react-native-executorch-bk-sdm-tiny
|
software-mansion
| 2025-09-22T08:21:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-22T07:36:55Z |
---
license: creativeml-openrail-m
---
# Introduction
This repository hosts the [BK-SDM-Tiny v-prediction variant](https://huggingface.co/vivym/bk-sdm-tiny-vpred) model for the [React Native ExecuTorch](https://www.npmjs.com/package/react-native-executorch) library. It includes the model exported for the XNNPACK backend in `.pte` format, ready for use in the **ExecuTorch** runtime.
If you'd like to run these models in your own ExecuTorch runtime, refer to the [official documentation](https://pytorch.org/executorch/stable/index.html) for setup instructions.
## Compatibility
If you intend to use these models outside of React Native ExecuTorch, make sure your runtime is compatible with the **ExecuTorch** version used to export the `.pte` files. For more details, see the compatibility note in the [ExecuTorch GitHub repository](https://github.com/pytorch/executorch/blob/11d1742fdeddcf05bc30a6cfac321d2a2e3b6768/runtime/COMPATIBILITY.md?plain=1#L4). If you work with React Native ExecuTorch, the constants from the library guarantee compatibility with the runtime used behind the scenes.
These models were exported with ExecuTorch v0.6.0, and **no forward compatibility** is guaranteed. Older versions of the runtime may not work with these files.
|
chinmayee-huggingface/ppo-LunarLander-v3
|
chinmayee-huggingface
| 2025-09-22T07:59:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-22T07:22:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 300.50 +/- 19.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the checkpoint from the Hub with `huggingface_sb3` (a minimal sketch; the checkpoint filename is an assumption, so check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is assumed to follow the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(repo_id="chinmayee-huggingface/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
obs, _ = gym.make("LunarLander-v3").reset()
action, _ = model.predict(obs, deterministic=True)
```
|
kitsunea/modernbert-assignment1
|
kitsunea
| 2025-09-22T07:53:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T07:45:28Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: modernbert-assignment1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-assignment1
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2512
- Accuracy: 0.9225
- F1: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
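For reference, the values above map onto a 🤗 `TrainingArguments` configuration along these lines (a sketch reconstructed from the card; `output_dir` is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="modernbert-assignment1",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",  # AdamW (fused) with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```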
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 313 | 0.2393 | 0.9195 | 0.9193 |
| 0.2689 | 2.0 | 626 | 0.2512 | 0.9225 | 0.9224 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
bedio/pythia-410m_init_base
|
bedio
| 2025-09-22T07:38:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T07:38:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tchiayan/paligemma-invoice
|
tchiayan
| 2025-09-22T07:37:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T01:21:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jac22/video
|
jac22
| 2025-09-22T07:25:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T07:46:46Z |
---
license: apache-2.0
---
|
Bavantha11/Pixelcopter-PLE-v0
|
Bavantha11
| 2025-09-22T07:24:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-19T01:52:43Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.00 +/- 18.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Hyperbyte-TTS-Experimental/Orpheus_TTS_3B_unsloth_Singlish_fine_tuned
|
Hyperbyte-TTS-Experimental
| 2025-09-22T07:08:18Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-14T01:03:57Z |
---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Hyperbyte-TTS-Experimental
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758524422
|
poolkiltzn
| 2025-09-22T07:01:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T07:01:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BKM1804/708823db-e4d1-4024-a8ab-039128bbc64d
|
BKM1804
| 2025-09-22T06:40:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T06:00:23Z |
---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenMOSE/RWKV-Qwen3-15B
|
OpenMOSE
| 2025-09-22T06:40:05Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T06:21:54Z |
---
license: apache-2.0
---
|
epreep/topic-classifier-finetuned
|
epreep
| 2025-09-22T06:33:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:pongjin/roberta_with_kornli",
"base_model:finetune:pongjin/roberta_with_kornli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T06:32:48Z |
---
library_name: transformers
license: apache-2.0
base_model: pongjin/roberta_with_kornli
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-classifier-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-classifier-finetuned
This model is a fine-tuned version of [pongjin/roberta_with_kornli](https://huggingface.co/pongjin/roberta_with_kornli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Accuracy: 0.9271
- F1 Macro: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
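In `TrainingArguments` terms, the configuration above reads roughly as follows (a sketch; `output_dir` is an assumption). Note how gradient accumulation yields the effective batch size of 64:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="topic-classifier-finetuned",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,  # 32 x 2 = 64 total train batch size
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```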
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 2.6509 | 0.0730 | 100 | 1.6595 | 0.7331 | 0.7135 |
| 1.1192 | 0.1459 | 200 | 0.6936 | 0.8678 | 0.8659 |
| 0.6712 | 0.2189 | 300 | 0.5144 | 0.8881 | 0.8881 |
| 0.5925 | 0.2919 | 400 | 0.4652 | 0.8915 | 0.8921 |
| 0.515 | 0.3648 | 500 | 0.4157 | 0.9000 | 0.8999 |
| 0.4675 | 0.4378 | 600 | 0.4020 | 0.8990 | 0.8990 |
| 0.4408 | 0.5108 | 700 | 0.3746 | 0.9039 | 0.9038 |
| 0.4237 | 0.5837 | 800 | 0.3597 | 0.9034 | 0.9041 |
| 0.4147 | 0.6567 | 900 | 0.3420 | 0.9057 | 0.9054 |
| 0.3874 | 0.7297 | 1000 | 0.3167 | 0.9121 | 0.9118 |
| 0.3614 | 0.8026 | 1100 | 0.3415 | 0.9081 | 0.9073 |
| 0.3651 | 0.8756 | 1200 | 0.3207 | 0.9097 | 0.9098 |
| 0.326 | 0.9486 | 1300 | 0.3178 | 0.9147 | 0.9142 |
| 0.3455 | 1.0212 | 1400 | 0.3235 | 0.9127 | 0.9120 |
| 0.2684 | 1.0941 | 1500 | 0.3038 | 0.9151 | 0.9150 |
| 0.2593 | 1.1671 | 1600 | 0.3101 | 0.9127 | 0.9121 |
| 0.2639 | 1.2401 | 1700 | 0.2992 | 0.9144 | 0.9147 |
| 0.2595 | 1.3130 | 1800 | 0.3078 | 0.9146 | 0.9144 |
| 0.2681 | 1.3860 | 1900 | 0.2959 | 0.9156 | 0.9157 |
| 0.2578 | 1.4590 | 2000 | 0.2909 | 0.9187 | 0.9183 |
| 0.2555 | 1.5319 | 2100 | 0.3025 | 0.9155 | 0.9149 |
| 0.2581 | 1.6049 | 2200 | 0.2815 | 0.9203 | 0.9201 |
| 0.2478 | 1.6779 | 2300 | 0.2833 | 0.9219 | 0.9216 |
| 0.2428 | 1.7508 | 2400 | 0.2831 | 0.9203 | 0.9202 |
| 0.2638 | 1.8238 | 2500 | 0.2710 | 0.9249 | 0.9248 |
| 0.2462 | 1.8968 | 2600 | 0.2799 | 0.9209 | 0.9208 |
| 0.2526 | 1.9697 | 2700 | 0.2826 | 0.9187 | 0.9189 |
| 0.2147 | 2.0423 | 2800 | 0.2718 | 0.9242 | 0.9241 |
| 0.1757 | 2.1153 | 2900 | 0.2817 | 0.9248 | 0.9248 |
| 0.1727 | 2.1883 | 3000 | 0.2821 | 0.9237 | 0.9235 |
| 0.1836 | 2.2612 | 3100 | 0.2875 | 0.9209 | 0.9211 |
| 0.1657 | 2.3342 | 3200 | 0.2767 | 0.9249 | 0.9248 |
| 0.1708 | 2.4072 | 3300 | 0.2757 | 0.9237 | 0.9237 |
| 0.1693 | 2.4801 | 3400 | 0.2752 | 0.9233 | 0.9233 |
| 0.1836 | 2.5531 | 3500 | 0.2793 | 0.9225 | 0.9224 |
| 0.1651 | 2.6260 | 3600 | 0.2790 | 0.9237 | 0.9236 |
| 0.1675 | 2.6990 | 3700 | 0.2741 | 0.9247 | 0.9247 |
| 0.1661 | 2.7720 | 3800 | 0.2717 | 0.9264 | 0.9263 |
| 0.1681 | 2.8449 | 3900 | 0.2718 | 0.9249 | 0.9249 |
| 0.1672 | 2.9179 | 4000 | 0.2695 | 0.9273 | 0.9271 |
| 0.1693 | 2.9909 | 4100 | 0.2698 | 0.9271 | 0.9270 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu128
- Datasets 4.1.1
- Tokenizers 0.22.1
|
AnveshAI/Anvesh-Image-Generation
|
AnveshAI
| 2025-09-22T06:28:37Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-09-22T06:27:46Z |
---
license: mit
language:
- en
---
|
hyongok2/embedingmodels
|
hyongok2
| 2025-09-22T06:26:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T13:59:02Z |
---
license: apache-2.0
---
|
onnxmodelzoo/resnet50d_Opset18
|
onnxmodelzoo
| 2025-09-22T06:12:59Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:12:51Z |
---
language: en
license: apache-2.0
model_name: resnet50d_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnet50_gn_Opset17
|
onnxmodelzoo
| 2025-09-22T06:11:51Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:11:41Z |
---
language: en
license: apache-2.0
model_name: resnet50_gn_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnet34_Opset17
|
onnxmodelzoo
| 2025-09-22T06:10:41Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:10:33Z |
---
language: en
license: apache-2.0
model_name: resnet34_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnet26_Opset18
|
onnxmodelzoo
| 2025-09-22T06:08:44Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:08:37Z |
---
language: en
license: apache-2.0
model_name: resnet26_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnet26_Opset17
|
onnxmodelzoo
| 2025-09-22T06:08:36Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:08:29Z |
---
language: en
license: apache-2.0
model_name: resnet26_Opset17.onnx
tags:
- Computer_Vision
---
|
SentiChain/aparecium-seq2seq-reverser
|
SentiChain
| 2025-09-22T06:05:19Z | 0 | 0 |
pytorch
|
[
"pytorch",
"transformer-decoder",
"seq2seq",
"embeddings",
"mpnet",
"text-reconstruction",
"crypto",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2025-03-31T21:43:18Z |
---
language: en
license: mit
library_name: pytorch
tags:
- transformer-decoder
- seq2seq
- embeddings
- mpnet
- text-reconstruction
- crypto
pipeline_tag: text-generation
---
### Aparecium Baseline Model Card
#### Summary
- **Task**: Reconstruct natural language posts from token‑level MPNet embeddings (reverse embedding).
- **Focus**: Crypto domain, with equities as auxiliary domain.
- **Checkpoint**: Baseline model trained with a phased schedule and early stopping.
- **Data**: 1.0M synthetic posts (500k crypto + 500k equities), programmatically generated via OpenAI API. No real social‑media content used.
- **Input contract**: token‑level MPNet matrix of shape `(seq_len, 768)`, not a pooled vector.
---
### Intended use
- Research and engineering use for studying reversibility of embedding spaces and for building diagnostics/tools around embedding interpretability.
- Not intended to reconstruct private or sensitive content; reconstruction accuracy depends on embedding fidelity and domain match.
---
### Model architecture
- Encoder side: External; we assume MPNet family encoder (default: `sentence-transformers/all-mpnet-base-v2`) to produce token‑level embeddings.
- Decoder: Transformer decoder consuming the MPNet memory:
- d_model: 768
- Decoder layers: 2
- Attention heads: 8
- FFN dim: 2048
- Token and positional embeddings; GELU activations
- Decoding:
- Supports greedy, sampling, and beam search.
- Optional embedding‑aware rescoring (cosine similarity between the candidate’s re‑embedded sentence and the pooled MPNet target).
- Optional lightweight constraints for hashtag/cashtag/URL continuity.
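For orientation, the decoder dimensions above correspond to a standard PyTorch stack like the one below. This is a structural sketch only, not the project's actual class; the vocabulary size and maximum length are assumptions:

```python
import torch.nn as nn

d_model, vocab_size, max_len = 768, 30527, 128  # vocab size and max length assumed

layer = nn.TransformerDecoderLayer(
    d_model=d_model, nhead=8, dim_feedforward=2048,
    activation="gelu", batch_first=True,
)
decoder = nn.TransformerDecoder(layer, num_layers=2)  # 2 decoder layers over the MPNet memory
token_emb = nn.Embedding(vocab_size, d_model)         # token embeddings
pos_emb = nn.Embedding(max_len, d_model)              # positional embeddings
lm_head = nn.Linear(d_model, vocab_size)              # projection back to the vocabulary
```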
Recommended inference defaults:
- `num_beams=8`
- `length_penalty_alpha=0.6`
- `lambda_sim=0.6`
- `rescore_every_k=4`, `rescore_top_m=8`
- `beta=10.0`
- `enable_constraints=True`
- `deterministic=True`
---
### Training data and provenance
- 1,000,000 synthetic posts total:
- 500,000 crypto‑domain posts
- 500,000 equities‑domain posts
- All posts were programmatically generated via the OpenAI API (synthetic). No real social‑media content was used.
- Embeddings:
- Token‑level MPNet (default: `sentence-transformers/all-mpnet-base-v2`).
- Cached to SQLite to avoid recomputation and allow resumable training.
---
### Training procedure (baseline regimen)
- Domain emphasis: 80% crypto / 20% equities per training phase.
- Phased training (10% of available chunks per phase), evaluate after each phase:
- In‑sample: small subset from the phase’s chunks
- Out‑of‑sample: small hold‑out from both domains (not seen in the phase)
- Early‑stop condition: stop if out‑of‑sample cosine degrades relative to prior phase.
- Optimizer: AdamW
- Learning rate (baseline finetune): 5e‑5
- Batch size: 16
- Input `max_source_length`: 256
- Target `max_target_length`: 128
- Checkpointing: every 2,000 steps and at phase end.
Notes
- Training used early stopping based on out‑of‑sample cosine.
---
### Evaluation protocol (for the metrics below)
- Sample size: 1,000 examples per domain drawn from cached embedding databases.
- Decode config: `num_beams=8`, `length_penalty_alpha=0.6`, `lambda_sim=0.6`, `rescore_every_k=4`, `rescore_top_m=8`, `beta=10.0`, `enable_constraints=True`, `deterministic=True`.
- Metrics:
- `cosine_mean/median/p10/p90`: cosine between the pooled MPNet embedding of the generated text and the pooled MPNet target vector (higher is better; a computation sketch follows this list).
- `score_norm_mean`: length‑penalized language model score (more positive is better; negative values are common for log‑scores).
- `degenerate_pct`: % of clearly degenerate generations (very short/blank/only hashtags).
- `domain_drift_pct`: % of equity‑like terms in crypto outputs (or crypto‑like terms in equities outputs). Heuristic text filter; intended as a rough indicator only.
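For concreteness, the cosine metric can be approximated as follows, assuming the pooling matches sentence-transformers' default mean pooling for `all-mpnet-base-v2` (in the actual protocol the reference side is the stored pooled target vector; a reference text stands in for it here):

```python
from sentence_transformers import SentenceTransformer, util

st = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
generated = "eth looking strong into the weekend #crypto"  # illustrative generation
reference = "ETH looks strong heading into the weekend"    # stand-in for the pooled target
emb = st.encode([generated, reference], convert_to_tensor=True, normalize_embeddings=True)
print(f"cosine = {util.cos_sim(emb[0], emb[1]).item():.3f}")
```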
Results (current `models/baseline` checkpoint)
- Crypto (n=1000)
- cosine_mean: 0.681
- cosine_median: 0.843
- cosine_p10: 0.000
- cosine_p90: 0.984
- score_norm_mean: −1.977
- degenerate_pct: 5.2%
- domain_drift_pct: 0.0%
- Equities (n=1000)
- cosine_mean: 0.778
- cosine_median: 0.901
- cosine_p10: 0.326
- cosine_p90: 0.986
- score_norm_mean: −1.344
- degenerate_pct: 2.2%
- domain_drift_pct: 4.4%
Interpretation
- The model reconstructs many posts with strong embedding alignment (p90 ≈ 0.98 cosine in both domains).
- Equities shows higher average/median cosine and lower degeneracy than crypto, consistent with the auxiliary‑domain role and data characteristics.
- A small fraction of degenerate outputs exists in both domains (crypto ~5.2%, equities ~2.2%).
- Domain drift is minimal from crypto→equities (0.0%) and present at a modest rate from equities→crypto (~4.4%) under the chosen heuristic.
---
### Input contract and usage
- **Input**: MPNet token‑level matrix `(seq_len × 768)` for a single post. Do not pass a pooled vector (see the sketch below).
- **Tokenizer/model alignment** matters: use the same MPNet tokenizer/model version that produced the embeddings.
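A minimal sketch of producing that matrix with 🤗 Transformers (the example post is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

encoder_name = "sentence-transformers/all-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name).eval()

post = "BTC just broke resistance, watching $ETH next #crypto"
inputs = tokenizer(post, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():
    memory = encoder(**inputs).last_hidden_state[0]  # shape: (seq_len, 768)
```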
---
### Limitations and responsible use
- Reconstruction is not guaranteed to match the original post text; it optimizes alignment within the MPNet embedding space and LM scoring.
- The model can produce generic or incomplete outputs (see `degenerate_pct`).
- Domain drift can occur depending on decode settings (see `domain_drift_pct`).
- Data are synthetic programmatic generations, not real social‑media posts. Domain semantics may differ from real‑world distributions.
- Do not use for reconstructing sensitive/private content or for attempting to de‑anonymize embedding corpora. This model is a research/diagnostic tool.
---
### Reproducibility (high‑level)
- Prepare embedding caches (not included): build local token‑level MPNet embedding caches for your corpora (e.g., via a data prep script) and store them in your own paths.
- Baseline training: iterative 10% phases, 80:20 (crypto:equities), LR=5e‑5, BS=16, early‑stop on out‑of‑sample cosine degradation.
- Evaluation: 1,000 samples/domain with the decode settings shown above.
- The released checkpoint corresponds to the latest non‑degrading phase under early‑stopping.
---
### License
- Code: MIT (per repository).
- Model weights: same as code unless declared otherwise upon release.
---
### Citation
If you use this model or codebase, please cite the Aparecium project and this baseline report.
|
onnxmodelzoo/resnet152_Opset16
|
onnxmodelzoo
| 2025-09-22T06:05:06Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:04:49Z |
---
language: en
license: apache-2.0
model_name: resnet152_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnet10t_Opset16
|
onnxmodelzoo
| 2025-09-22T06:04:16Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:04:09Z |
---
language: en
license: apache-2.0
model_name: resnet10t_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnest50d_4s2x40d_Opset16
|
onnxmodelzoo
| 2025-09-22T06:02:15Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T06:02:05Z |
---
language: en
license: apache-2.0
model_name: resnest50d_4s2x40d_Opset16.onnx
tags:
- Computer_Vision
---
|
sssssungjae/qwen2.5-dpo-shi2-gguf
|
sssssungjae
| 2025-09-22T06:01:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:sssssungjae/qwen2.5-dpo-shi2",
"base_model:quantized:sssssungjae/qwen2.5-dpo-shi2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T06:00:48Z |
---
base_model: sssssungjae/qwen2.5-dpo-shi2
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sssssungjae
- **License:** apache-2.0
- **Finetuned from model:** sssssungjae/qwen2.5-dpo-shi2
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onnxmodelzoo/resmlp_big_24_224_in22ft1k_Opset17
|
onnxmodelzoo
| 2025-09-22T05:57:36Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:57:05Z |
---
language: en
license: apache-2.0
model_name: resmlp_big_24_224_in22ft1k_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resmlp_24_distilled_224_Opset16
|
onnxmodelzoo
| 2025-09-22T05:55:24Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:55:14Z |
---
language: en
license: apache-2.0
model_name: resmlp_24_distilled_224_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/res2next50_Opset16
|
onnxmodelzoo
| 2025-09-22T05:53:30Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:53:21Z |
---
language: en
license: apache-2.0
model_name: res2next50_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/res2net50_26w_4s_Opset18
|
onnxmodelzoo
| 2025-09-22T05:52:14Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:52:05Z |
---
language: en
license: apache-2.0
model_name: res2net50_26w_4s_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/res2net101_26w_4s_Opset17
|
onnxmodelzoo
| 2025-09-22T05:51:01Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:50:47Z |
---
language: en
license: apache-2.0
model_name: res2net101_26w_4s_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/res2net101_26w_4s_Opset16
|
onnxmodelzoo
| 2025-09-22T05:50:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:50:32Z |
---
language: en
license: apache-2.0
model_name: res2net101_26w_4s_Opset16.onnx
tags:
- Computer_Vision
---
|
piyazon/ASR-cv-corpus-ug-22-2
|
piyazon
| 2025-09-22T05:50:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-21T06:14:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| onnxmodelzoo/repvgg_b3g4_Opset16 | onnxmodelzoo | 2025-09-22T05:49:50Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:49:30Z |
---
language: en
license: apache-2.0
model_name: repvgg_b3g4_Opset16.onnx
tags:
- Computer_Vision
---
| vincentcklau/Qwen3-0.6B-Gensyn-Swarm-amphibious_melodic_macaw | vincentcklau | 2025-09-22T05:48:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am amphibious_melodic_macaw", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-21T07:13:14Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am amphibious_melodic_macaw
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
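No snippet has been provided yet; the sketch below assumes the checkpoint works with the standard 🤗 transformers `text-generation` pipeline, as the repo's `text-generation` tag suggests, and is illustrative rather than confirmed:

```python
# Minimal sketch, assuming the repo's text-generation tag means the
# standard 🤗 transformers pipeline can load this checkpoint directly.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="vincentcklau/Qwen3-0.6B-Gensyn-Swarm-amphibious_melodic_macaw",
)
print(generator("Hello, I am", max_new_tokens=32)[0]["generated_text"])
```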
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| onnxmodelzoo/repvgg_b2_Opset18 | onnxmodelzoo | 2025-09-22T05:47:12Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:46:52Z |
---
language: en
license: apache-2.0
model_name: repvgg_b2_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/repvgg_b0_Opset17 | onnxmodelzoo | 2025-09-22T05:44:25Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:44:18Z |
---
language: en
license: apache-2.0
model_name: repvgg_b0_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/repvgg_a2_Opset17 | onnxmodelzoo | 2025-09-22T05:43:59Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:43:49Z |
---
language: en
license: apache-2.0
model_name: repvgg_a2_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetz_d8_Opset17 | onnxmodelzoo | 2025-09-22T05:43:04Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:42:56Z |
---
language: en
license: apache-2.0
model_name: regnetz_d8_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetz_d32_Opset17 | onnxmodelzoo | 2025-09-22T05:42:27Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:42:18Z |
---
language: en
license: apache-2.0
model_name: regnetz_d32_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetz_c16_Opset16 | onnxmodelzoo | 2025-09-22T05:41:58Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:41:51Z |
---
language: en
license: apache-2.0
model_name: regnetz_c16_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetz_b16_Opset16 | onnxmodelzoo | 2025-09-22T05:41:30Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:41:23Z |
---
language: en
license: apache-2.0
model_name: regnetz_b16_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnety_160_Opset16 | onnxmodelzoo | 2025-09-22T05:39:14Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:38:52Z |
---
language: en
license: apache-2.0
model_name: regnety_160_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnety_040_Opset16 | onnxmodelzoo | 2025-09-22T05:37:24Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:37:15Z |
---
language: en
license: apache-2.0
model_name: regnety_040_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnety_016_Opset16 | onnxmodelzoo | 2025-09-22T05:36:50Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:36:44Z |
---
language: en
license: apache-2.0
model_name: regnety_016_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_320_Opset17 | onnxmodelzoo | 2025-09-22T05:35:36Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:35:15Z |
---
language: en
license: apache-2.0
model_name: regnetx_320_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_160_Opset18 | onnxmodelzoo | 2025-09-22T05:34:53Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:34:40Z |
---
language: en
license: apache-2.0
model_name: regnetx_160_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_080_Opset18 | onnxmodelzoo | 2025-09-22T05:33:34Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:33:23Z |
---
language: en
license: apache-2.0
model_name: regnetx_080_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_064_Opset18 | onnxmodelzoo | 2025-09-22T05:32:58Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:32:50Z |
---
language: en
license: apache-2.0
model_name: regnetx_064_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_040_Opset18 | onnxmodelzoo | 2025-09-22T05:32:29Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:32:21Z |
---
language: en
license: apache-2.0
model_name: regnetx_040_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_016_Opset17 | onnxmodelzoo | 2025-09-22T05:30:51Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:30:45Z |
---
language: en
license: apache-2.0
model_name: regnetx_016_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_016_Opset16 | onnxmodelzoo | 2025-09-22T05:30:45Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:30:40Z |
---
language: en
license: apache-2.0
model_name: regnetx_016_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_004_Opset18 | onnxmodelzoo | 2025-09-22T05:30:08Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:30:03Z |
---
language: en
license: apache-2.0
model_name: regnetx_004_Opset18.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetx_002_Opset16 | onnxmodelzoo | 2025-09-22T05:29:44Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:29:40Z |
---
language: en
license: apache-2.0
model_name: regnetx_002_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetv_064_Opset17 | onnxmodelzoo | 2025-09-22T05:29:40Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:29:31Z |
---
language: en
license: apache-2.0
model_name: regnetv_064_Opset17.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnetv_064_Opset16 | onnxmodelzoo | 2025-09-22T05:29:30Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:29:21Z |
---
language: en
license: apache-2.0
model_name: regnetv_064_Opset16.onnx
tags:
- Computer_Vision
---
| onnxmodelzoo/regnet_y_32gf_Opset16 | onnxmodelzoo | 2025-09-22T05:27:51Z | 0 | 0 | null | ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"] | null | 2025-09-22T05:27:23Z |
---
language: en
license: apache-2.0
model_name: regnet_y_32gf_Opset16.onnx
tags:
- Computer_Vision
---