modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
TurkuNLP/web-register-classification-multilingual-bge | TurkuNLP | 2025-09-19T11:40:45Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-19T11:40:45Z |
---
license: apache-2.0
---
|
Quincy-seun/Qwen3-0.6B-Gensyn-Swarm-dense_moist_locust | Quincy-seun | 2025-09-19T11:24:45Z | 67 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am dense_moist_locust", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-16T10:52:02Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am dense_moist_locust
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
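Until the authors complete this section, the tags above (`transformers`, `qwen3`, `text-generation`) suggest a standard causal-LM workflow. The snippet below is a generic, unverified sketch based on those tags; the prompt and generation length are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Quincy-seun/Qwen3-0.6B-Gensyn-Swarm-dense_moist_locust"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative generation call; tune max_new_tokens and sampling to taste
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```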
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/Smoothie-Qwen3-1.7B-Gensyn-Swarm-lazy_energetic_badger | RMCian | 2025-09-19T11:24:05Z | 59 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am lazy_energetic_badger", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-16T03:23:50Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lazy_energetic_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sourled/Qwen3-0.6B-Gensyn-Swarm-scaly_aquatic_clam | sourled | 2025-09-19T11:23:17Z | 19 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am scaly_aquatic_clam", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-09T17:24:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scaly_aquatic_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ellisdoro/cro-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early | ellisdoro | 2025-09-19T10:57:53Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-cross_attention", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-19T10:57:51Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- small-ontology
---
# cro_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cro.owl
- **Domain**: general
- **Ontology Concepts**: 105
- **Concept Alignment**: 105/105 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 105
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 91.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Fuses the text and ontological embeddings using this model's configured fusion method (cross_attention)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 105 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
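For a concrete picture of the fusion step, here is a minimal PyTorch sketch of cross-attention fusion using the dimensions listed above (384-d text, 64-d ontology output, 512 hidden). It is illustrative only; the actual on2vec layer may differ in heads, normalization, and projection details.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion of text and ontology embeddings."""
    def __init__(self, text_dim=384, onto_dim=64, hidden_dim=512, out_dim=64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.onto_proj = nn.Linear(onto_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, text_emb, onto_emb):
        # The text embedding queries the ontological embedding
        q = self.text_proj(text_emb).unsqueeze(1)   # (batch, 1, hidden)
        kv = self.onto_proj(onto_emb).unsqueeze(1)  # (batch, 1, hidden)
        fused, _ = self.attn(q, kv, kv)
        return self.out(fused.squeeze(1))           # (batch, out_dim)

fusion = CrossAttentionFusion()
print(fusion(torch.randn(2, 384), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```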
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('ellisdoro/cro-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships (a toy sketch follows this list)
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
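As a rough illustration of step 2, the sketch below trains a two-layer GCN over an ontology graph with PyTorch Geometric. The edge list and the contrastive cosine objective are assumptions for demonstration; this is not the actual on2vec training code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

# Hypothetical toy graph: 105 concepts, random edges standing in for OWL axioms
num_concepts, feat_dim, out_dim = 105, 105, 64
x = torch.eye(num_concepts, feat_dim)                  # one-hot concept features
edge_index = torch.randint(0, num_concepts, (2, 400))  # placeholder edge list

conv1 = GCNConv(feat_dim, 512)
conv2 = GCNConv(512, out_dim)
opt = torch.optim.Adam(list(conv1.parameters()) + list(conv2.parameters()), lr=1e-3)

for _ in range(100):
    h = conv2(torch.relu(conv1(x, edge_index)), edge_index)
    src, dst = edge_index
    neg = torch.randint(0, num_concepts, (src.numel(),))
    # Pull connected concepts together, push random pairs apart (cosine objective)
    loss = (1 - F.cosine_similarity(h[src], h[dst])).mean() \
         + F.cosine_similarity(h[src], h[neg]).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```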
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
rejauldu/bn-gpt2-finetuned | rejauldu | 2025-09-19T10:38:32Z | 18 | 0 | null | ["safetensors", "gpt2", "finetuned", "bengali", "text-generation", "license:apache-2.0", "region:us"] | text-generation | 2025-09-17T13:27:43Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
- bengali
inference: true
widget:
- text: My name Rejaul.
---
# Bengali GPT-2
This is a GPT-2 model **finetuned on Bengali Wikipedia**. It is designed for **text generation** in Bengali.
## Model Details
- **Base model**: GPT-2
- **Tokenizer**: Custom Bengali tokenizer (ByteLevel BPE)
- **Language**: Bengali (bn)
- **Task**: Text generation (causal language modeling)
- **Training data**: Cleaned and deduplicated Bengali Wikipedia dump
- **License**: Apache 2.0
---
## Usage
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
# Load tokenizer and model from Hugging Face
tokenizer = GPT2TokenizerFast.from_pretrained("rejauldu/bn-gpt2-finetuned")
model = GPT2LMHeadModel.from_pretrained("rejauldu/bn-gpt2-finetuned")
# Generate text
inputs = tokenizer("বাংলায় স্বাগত", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0]))
```
|
Kush26/Mental_Health_ChatBot | Kush26 | 2025-09-19T10:26:08Z | 0 | 0 | null | ["gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-19T10:17:41Z |
---
license: apache-2.0
---
|
TianheWu/VisualQuality-R1-7B | TianheWu | 2025-09-19T09:46:22Z | 934 | 4 | null | ["safetensors", "qwen2_5_vl", "IQA", "Reasoning", "VLM", "Pytorch", "R1", "GRPO", "RL2R", "reinforcement-learning", "en", "arxiv:2505.14460", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "region:us"] | reinforcement-learning | 2025-05-25T06:59:49Z |
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: reinforcement-learning
tags:
- IQA
- Reasoning
- VLM
- Pytorch
- R1
- GRPO
- RL2R
---
# VisualQuality-R1-7B
Our paper has been accepted as a **spotlight** at NeurIPS 2025!
This is the latest version of VisualQuality-R1, trained on a diverse combination of synthetic and realistic datasets.<br>
Paper link: [arXiv](https://arxiv.org/abs/2505.14460)<br>
Code link: [github](https://github.com/TianheWu/VisualQuality-R1)
> The first NR-IQA model enhanced by RL2R, capable of both quality description and rating through reasoning.
<img src="https://cdn-uploads.huggingface.co/production/uploads/655de51982afda0fc479fb91/JZgVeMtAVASCCNYO5VCyn.png" width="600"/>
## ⚡Quick Start
### Non-Thinking Inference
When running VisualQuality-R1 as a reward/evaluation model, you can use **non-thinking** mode to reduce inference time, generating only the final score with the following prompt:
```
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} Please only output the final answer with only one score in <answer> </answer> tags."
```
For single image quality rating, the code is:
<details>
<summary>Example Code (VisualQuality-R1: Image Quality Rating with non-thinking mode)</summary>
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
import random
import re
import os
def score_image(image_path, model, processor):
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} Please only output the final answer with only one score in <answer> </answer> tags."
message = [
{
"role": "user",
"content": [
{'type': 'image', 'image': image_path},
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=PROMPT)}
],
}
]
batch_messages = [message]
# Preparation for inference
text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
image_inputs, video_inputs = process_vision_info(batch_messages)
inputs = processor(
text=text,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=2048, do_sample=True, top_k=50, top_p=1)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
reasoning = None
try:
model_output_matches = re.findall(r'<answer>(.*?)</answer>', batch_output_text[0], re.DOTALL)
model_answer = model_output_matches[-1].strip() if model_output_matches else batch_output_text[0].strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
except:
print(f"================= Meet error with {img_path}, please generate again. =================")
score = random.randint(1, 5)
return reasoning, score
random.seed(1)
MODEL_PATH = ""
device = torch.device("cuda:5") if torch.cuda.is_available() else torch.device("cpu")
image_path = ""
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map=device,
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)
processor.tokenizer.padding_side = "left"
reasoning, score = score_image(
image_path, model, processor
)
print(score)
```
</details>
<details>
<summary>Example Code (VisualQuality-R1: Batch Images Quality Rating with non-thinking mode)</summary>
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
from tqdm import tqdm
import torch
import random
import re
import os
def get_image_paths(folder_path):
image_extensions = {'.jpg', '.jpeg', '.png', '.bmp', '.gif', '.tiff', '.webp'}
image_paths = []
for root, dirs, files in os.walk(folder_path):
for file in files:
_, ext = os.path.splitext(file)
if ext.lower() in image_extensions:
image_paths.append(os.path.join(root, file))
return image_paths
def score_batch_image(image_paths, model, processor):
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} Please only output the final answer with only one score in <answer> </answer> tags."
messages = []
for img_path in image_paths:
message = [
{
"role": "user",
"content": [
{'type': 'image', 'image': img_path},
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=PROMPT)}
],
}
]
messages.append(message)
BSZ = 32
all_outputs = [] # List to store all answers
for i in tqdm(range(0, len(messages), BSZ)):
batch_messages = messages[i:i + BSZ]
# Preparation for inference
text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
image_inputs, video_inputs = process_vision_info(batch_messages)
inputs = processor(
text=text,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=512, do_sample=True, top_k=50, top_p=1)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
all_outputs.extend(batch_output_text)
path_score_dict = {}
for img_path, model_output in zip(image_paths, all_outputs):
try:
model_output_matches = re.findall(r'<answer>(.*?)</answer>', model_output, re.DOTALL)
model_answer = model_output_matches[-1].strip() if model_output_matches else model_output.strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
except:
print(f"Meet error with {img_path}, please generate again.")
score = random.randint(1, 5)
path_score_dict[img_path] = score
return path_score_dict
random.seed(1)
MODEL_PATH = ""
device = torch.device("cuda:3") if torch.cuda.is_available() else torch.device("cpu")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map=device,
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)
processor.tokenizer.padding_side = "left"
image_root = ""
image_paths = get_image_paths(image_root) # It should be a list
path_score_dict = score_batch_image(
image_paths, model, processor
)
file_name = "output.txt"
with open(file_name, "w") as file:
for key, value in path_score_dict.items():
file.write(f"{key} {value}\n")
print("Done!")
```
</details>
### Thinking Inference
<details>
<summary>Example Code (VisualQuality-R1: Single Image Quality Rating with thinking)</summary>
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
import random
import re
import os
def score_image(image_path, model, processor):
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags."
# QUESTION_TEMPLATE = "Please describe the quality of this image."
message = [
{
"role": "user",
"content": [
{'type': 'image', 'image': image_path},
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=PROMPT)}
],
}
]
batch_messages = [message]
# Preparation for inference
text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
image_inputs, video_inputs = process_vision_info(batch_messages)
inputs = processor(
text=text,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=2048, do_sample=True, top_k=50, top_p=1)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
reasoning = re.findall(r'<think>(.*?)</think>', batch_output_text[0], re.DOTALL)
reasoning = reasoning[-1].strip() if reasoning else None  # guard against missing <think> tags
try:
model_output_matches = re.findall(r'<answer>(.*?)</answer>', batch_output_text[0], re.DOTALL)
model_answer = model_output_matches[-1].strip() if model_output_matches else batch_output_text[0].strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
except:
print(f"================= Meet error with {img_path}, please generate again. =================")
score = random.randint(1, 5)
return reasoning, score
random.seed(1)
MODEL_PATH = ""
device = torch.device("cuda:5") if torch.cuda.is_available() else torch.device("cpu")
image_path = ""
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map=device,
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)
processor.tokenizer.padding_side = "left"
reasoning, score = score_image(
image_path, model, processor
)
print(reasoning)
print(score)
```
</details>
<details>
<summary>Example Code (VisualQuality-R1: Batch Images Quality Rating with thinking)</summary>
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
from tqdm import tqdm
import torch
import random
import re
import os
def get_image_paths(folder_path):
image_extensions = {'.jpg', '.jpeg', '.png', '.bmp', '.gif', '.tiff', '.webp'}
image_paths = []
for root, dirs, files in os.walk(folder_path):
for file in files:
_, ext = os.path.splitext(file)
if ext.lower() in image_extensions:
image_paths.append(os.path.join(root, file))
return image_paths
def score_batch_image(image_paths, model, processor):
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags."
messages = []
for img_path in image_paths:
message = [
{
"role": "user",
"content": [
{'type': 'image', 'image': img_path},
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=PROMPT)}
],
}
]
messages.append(message)
BSZ = 32
all_outputs = [] # List to store all answers
for i in tqdm(range(0, len(messages), BSZ)):
batch_messages = messages[i:i + BSZ]
# Preparation for inference
text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
image_inputs, video_inputs = process_vision_info(batch_messages)
inputs = processor(
text=text,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=512, do_sample=True, top_k=50, top_p=1)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
all_outputs.extend(batch_output_text)
path_score_dict = {}
for img_path, model_output in zip(image_paths, all_outputs):
reasoning = re.findall(r'<think>(.*?)</think>', model_output, re.DOTALL)
reasoning = reasoning[-1].strip() if reasoning else None  # guard against missing <think> tags
try:
model_output_matches = re.findall(r'<answer>(.*?)</answer>', model_output, re.DOTALL)
model_answer = model_output_matches[-1].strip() if model_output_matches else model_output.strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
except:
print(f"Meet error with {img_path}, please generate again.")
score = random.randint(1, 5)
path_score_dict[img_path] = score
return path_score_dict
random.seed(1)
MODEL_PATH = ""
device = torch.device("cuda:3") if torch.cuda.is_available() else torch.device("cpu")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map=device,
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)
processor.tokenizer.padding_side = "left"
image_root = ""
image_paths = get_image_paths(image_root) # It should be a list
path_score_dict = score_batch_image(
image_paths, model, processor
)
file_name = "output.txt"
with open(file_name, "w") as file:
for key, value in path_score_dict.items():
file.write(f"{key} {value}\n")
print("Done!")
```
</details>
## 🚀 Updated: High-efficiency VisualQuality-R1 inference script with vLLM
<details>
<summary>Example Code (VisualQuality-R1: Batch Images Quality Rating with thinking, using vLLM)</summary>
```python
# Please install vLLM first: https://docs.vllm.ai/en/stable/getting_started/installation/gpu.html
from transformers import Qwen2_5_VLProcessor, AutoProcessor
from vllm import LLM, RequestOutput, SamplingParams
from qwen_vl_utils import process_vision_info
import torch
import random
import re
import os
IMAGE_PATH = "./images"
MODEL_PATH = "TianheWu/VisualQuality-R1-7B"
def get_image_paths(folder_path):
image_extensions = {'.jpg', '.jpeg', '.png', '.bmp', '.gif', '.tiff', '.webp'}
image_paths = []
for root, dirs, files in os.walk(folder_path):
for file in files:
_, ext = os.path.splitext(file)
if ext.lower() in image_extensions:
image_paths.append(os.path.join(root, file))
return image_paths
def score_batch_image(image_paths, model: LLM, processor: Qwen2_5_VLProcessor):
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags."
messages = []
for img_path in image_paths:
message = [
{
"role": "user",
"content": [
{'type': 'image', 'image': img_path},
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=PROMPT)}
],
}
]
messages.append(message)
all_outputs = [] # List to store all answers
# Preparation for inference
print("preprocessing ...")
texts = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in messages]
image_inputs, video_inputs = process_vision_info(messages)
inputs = [{
"prompt": texts[i],
"multi_modal_data": {
"image": image_inputs[i]
},
} for i in range(len(messages))]
output: list[RequestOutput] = model.generate(
inputs,
sampling_params=SamplingParams(
max_tokens=512,
temperature=0.1,
top_k=50,
top_p=1.0,
stop_token_ids=[processor.tokenizer.eos_token_id],
),
)
batch_output_text = [o.outputs[0].text for o in output]
all_outputs.extend(batch_output_text)
path_score_dict = {}
for img_path, model_output in zip(image_paths, all_outputs):
print(f"{model_output = }")
try:
model_output_matches = re.findall(r'<answer>(.*?)</answer>', model_output, re.DOTALL)
model_answer = model_output_matches[-1].strip() if model_output_matches else model_output.strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
except:
print(f"Meet error with {img_path}, please generate again.")
score = random.randint(1, 5)
path_score_dict[img_path] = score
return path_score_dict
random.seed(1)
model = LLM(
model=MODEL_PATH,
tensor_parallel_size=1,
trust_remote_code=True,
seed=1,
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)
processor.tokenizer.padding_side = "left"
image_paths = get_image_paths(IMAGE_PATH) # It should be a list
path_score_dict = score_batch_image(
image_paths, model, processor
)
file_name = "output.txt"
with open(file_name, "w") as file:
for key, value in path_score_dict.items():
file.write(f"{key} {value}\n")
print("Done!")
```
</details>
## Training
### Preparation
1. To smoothly execute the training procedure, first download the IQA images and place them all in a **single folder**.
2. Given an original MOS file (e.g., KADID-10K_mos.txt), first execute `cd datasets`, then run `python make_data.py` (with moderate modifications) to generate a **JSON file** for model training.
3. Download the [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) into a folder.
### Training within a Single Node
Please modify three elements in `src/open-r1-multimodal/run_scripts/KADID-10K/one_node_run_kadid.sh`:
```
--model_name_or_path [Your Qwen2.5-VL-7B-Instruct path] \
--image_folders [Your dataset images path] \
--data_file_paths [Your JSON file path] \
```
Then, run:
```
bash src/open-r1-multimodal/run_scripts/KADID-10K/one_node_run_kadid.sh
```
### Training within Multiple Nodes
After making the necessary modifications, run the following command:
```
bash src/open-r1-multimodal/run_scripts/KADID-10K/multi_run_kadid.sh
```
## Acknowledgement
- [VLM-R1](https://github.com/om-ai-lab/VLM-R1): Our codebase builds on VLM-R1.
I would like to sincerely thank [Zhuoyan Luo](https://scholar.google.com/citations?user=mKQhEsIAAAAJ&hl=en&oi=ao) for the generous support of my project and for the invaluable guidance in the field of AR generation.
## 📧 Contact
If you have any questions, please email `[email protected]` or `[email protected]`.
## BibTeX
```
@article{wu2025visualquality,
title={{VisualQuality-R1}: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank},
author={Wu, Tianhe and Zou, Jian and Liang, Jie and Zhang, Lei and Ma, Kede},
journal={arXiv preprint arXiv:2505.14460},
year={2025}
}
```
|
ahmadsubhan291102/neurahealth-chat | ahmadsubhan291102 | 2025-09-19T09:32:23Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "endpoints_compatible", "region:us"] | null | 2025-09-16T15:49:16Z |
---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: neurahealth-chat
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for neurahealth-chat
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ahmadsubhan291102/neurahealth-chat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ahmadsubhan291102-bahria-university/NeuraHealth-Medical-Chatbot/runs/3kk5htvk?apiKey=466322ad8de2d67774ad16bda16f12323a2a5064)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
twelvehertz/open-o3-sft-7 | twelvehertz | 2025-09-19T09:26:55Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "lora", "sft", "transformers", "trl", "unsloth", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-14B-Instruct", "region:us"] | null | 2025-09-19T09:26:46Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: peft
tags:
- base_model:adapter:unsloth/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
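Pending details from the authors, the metadata (library `peft`, tags `lora`/`sft`, base model `unsloth/Qwen2.5-14B-Instruct`) implies a standard LoRA-adapter loading flow. The sketch below is an assumption based on that metadata, not verified usage instructions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base, "twelvehertz/open-o3-sft-7")
```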
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
hyrinmansoor/changAI-s1-roberta-dataset | hyrinmansoor | 2025-09-19T09:17:32Z | 0 | 0 | null | ["changai", "erpnext", "text2sql", "doctype-detection", "en", "license:apache-2.0", "region:us"] | null | 2025-09-19T08:43:01Z |
---
license: apache-2.0
tags:
- changai
- erpnext
- text2sql
- doctype-detection
language: en
---
# 📊 ChangAI – RoBERTa Doctype Training Dataset
This dataset is used to train the **RoBERTa model** in the **ChangAI pipeline**, which predicts the correct **ERPNext Doctype(s)** from a natural language business question.
The goal is to teach the model how ERP users actually ask questions — messy, realistic, and varied — so it can map them to the right Doctype such as *Customer, Sales Invoice, Supplier, Employee, Item, Purchase Order, etc.*
---
## 📂 Data Format
Each dataset file is in **JSON format** (a single JSON array containing multiple objects).
- `instruction`: Always the fixed string below
- `input`: The natural language business question (messy variant)
- `output`: The correct Doctype(s) as a list
**Instruction string (must be exact):**
```
Predict the relevant ERPNext Doctype(s) for the question below.
```
**Example (`ROBERTa_train.json`):**
```json
[
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"who is the top supplier by purchase value this year","output":["Supplier"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"how many sales invoices were generated last quarter","output":["Sales Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"show employees whose status is set to left","output":["Employee"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"list all items with stock below reorder level","output":["Item"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"total purchase invoice amount for last month","output":["Purchase Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"sales orders created but not billed yet","output":["Sales Order"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"give me the latest journal entry posted","output":["Journal Entry"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"how many active projects are ongoing right now","output":["Project"]}
]
```
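Since each file is a single JSON array, a plain `json.load` is enough to inspect or preprocess it; the filename below is the example given above.

```python
import json

with open("ROBERTa_train.json") as f:
    rows = json.load(f)  # list of {"instruction", "input", "output"} objects

for row in rows[:3]:
    print(row["input"], "->", row["output"])
```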
---
## ✅ What We Need in This Dataset
* **Coverage across Doctypes**
Customer, Employee, Supplier, Sales Invoice, Purchase Invoice, Item, Tax Category, etc.
* **Different question types** for each Doctype
Counts, filters, status checks, comparisons, lists, etc.
* **Multiple messy variants** of each question
Typos, shorthand, synonyms, broken grammar, natural ERP user style.
* **Business-focused queries only**
* ✅ “Show unpaid supplier invoices”
* ❌ “Where do I click to add a new invoice?” (UI/functional, not data)
---
## 🤝 How to Contribute
1. Pick one or more **Doctypes**.
2. For each, write **realistic business questions** that ERP users would ask.
3. Provide **many messy variants** of each question (e.g., 50–100 per intent).
4. Save in **JSON format** (array of objects with `instruction`, `input`, `output`).
5. Upload your file by:
* **Preferred:** Go to **Add file → Upload file**, and in the filename field type:
```
contrib/<your_filename>.json
```
(this will automatically create the `/contrib/` folder and place your file there).
* Example:
```
contrib/sales_invoice_unpaid_variants.json
contrib/employee_active_status.json
```
6. Submit as a **Pull Request** → maintainers will review & merge.
---
## 🧩 Example Contribution (Sales Invoice – unpaid invoices)
```json
[
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"sales inv not paid yet??","output":["Sales Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"show me all cust bills still open","output":["Sales Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"any pending invoces from buyers??","output":["Sales Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"list invoicez no payment done","output":["Sales Invoice"]},
{"instruction":"Predict the relevant ERPNext Doctype(s) for the question below.","input":"which sale bills r due??","output":["Sales Invoice"]}
]
```
---
## 📚 Related
* 🔗 [ChangAI – RoBERTa Doctype Model](https://huggingface.co/hyrinmansoor/text2frappe-s1-roberta)
* 🔗 [ChangAI GitHub Repo](https://github.com/ERPGulf/ChangAI)
---
## 📜 License
Apache 2.0
|
sssssungjae/qwen2.5-dpo-ultrafeedback | sssssungjae | 2025-09-19T09:15:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "dpo", "arxiv:2305.18290", "base_model:sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15", "base_model:finetune:sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15", "endpoints_compatible", "region:us"] | null | 2025-09-19T08:52:25Z |
---
base_model: sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15
library_name: transformers
model_name: qwen2.5-dpo-ultrafeedback
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-dpo-ultrafeedback
This model is a fine-tuned version of [sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15](https://huggingface.co/sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sssssungjae/qwen2.5-dpo-ultrafeedback", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/whtjdwo0507-sk-networks/huggingface/runs/tapkbws7)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-rte-epochs3
|
aamijar
| 2025-09-19T09:01:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T09:00:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-rte-epochs1
|
aamijar
| 2025-09-19T08:47:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T08:47:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomal66/qwen3-0.6b-sarcasm-fpt-sft
|
tomal66
| 2025-09-19T08:02:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T08:02:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ranksu/FD-Swin
|
ranksu
| 2025-09-19T07:49:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T14:05:08Z |
---
license: apache-2.0
---
|
dsagasdgds/blockassist
|
dsagasdgds
| 2025-09-19T07:37:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"unseen camouflaged komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:39:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- unseen camouflaged komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kagvi13/HMP
|
kagvi13
| 2025-09-19T07:30:05Z | 0 | 0 |
custom
|
[
"custom",
"hmp",
"cognitive-architecture",
"distributed-ai",
"mesh-protocol",
"en",
"arxiv:2507.00951",
"arxiv:2507.21046",
"arxiv:2507.03724",
"arxiv:2506.24019",
"license:cc-by-4.0",
"region:us"
] | null | 2025-07-25T12:21:44Z |
---
license: cc-by-4.0
tags:
- hmp
- cognitive-architecture
- distributed-ai
- mesh-protocol
library_name: custom
inference: false
datasets: []
language: en
---
# HyperCortex Mesh Protocol (HMP)
| 🌍 Languages | 🇬🇧 [EN](README.md) | 🇩🇪 [DE](README_de.md) | 🇫🇷 [FR](README_fr.md) | 🇺🇦 [UK](README_uk.md) | 🇷🇺 [RU](README_ru.md) | 🇯🇵 [JA](README_ja.md) | 🇰🇷 [KO](README_ko.md) | 🇨🇳 [ZH](README_zh.md) |
|--------------|----------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
**HyperCortex Mesh Protocol (HMP)** is an open specification for building decentralized cognitive networks where AI agents can self-organize, share knowledge, align ethically, and reach consensus — even when Core LLMs are unavailable.
Project status: **Draft RFC v4.1**
---
```
                                  [HMP-Agent]
                                       ▲
                                       │
      ┌─────┴────────────────┬────────────────────────┬───────────────────┬─────────────┬───────────┐
      │                      │                        │                   │             │           │
      ▼                      ▼                        ▼                   ▼             ▼           ▼
[Reputation Profile]  [Semantic Graph]  [Cognitive Diary]  [Goals / Tasks]  [Ethics]  [Messages]   <----- DataBase
      ▲                      ▲                 ▲                  ▲             ▲          ▲  ▲       (local agent state)
      │                      │                 │                  │             │          │  │
      │                      └───────────────┴────────────────┬───────┘        │          │  │
      │                                                       │                │          │  │
      ▼                      ▼                                ▼                ▼          │
[MeshConsensus]         [CogSync]                           [GMP]            [EGP]        │  <----- Pluggable Protocols
      ▲                      ▲                                ▲                ▲          │       (inter-agent coordination)
      │                      │                                │                │          │
      └────────────┬──────────────────────────┴───────────────────────────┴─────────────┴───────────┘
                   │
                   ▼
           [P2P Mesh Network]
```
Protocols:
- MeshConsensus - Mesh Consensus
- CogSync - Data Synchronization
- GMP - Goal Management Protocol
- EGP - Ethical Governance Protocol
---
## ❗ Why This Matters
HMP addresses challenges that are becoming central in AGI research:
* long-term memory and knowledge consistency,
* self-evolving agents,
* multi-agent architectures,
* cognitive diaries and conceptual graphs.
See the latest review of state-of-the-art AGI research (July 2025):
["On the Path to Superintelligence: From Agentic Internet to Gravity Encoding"](https://habr.com/ru/articles/939026/).
Particularly relevant sections:
* [Beyond Tokens: Building the Intelligence of the Future](https://arxiv.org/abs/2507.00951)
* [Self-Evolving Agents](https://arxiv.org/abs/2507.21046)
* [MemOS: A New Operating System for Memory](https://arxiv.org/abs/2507.03724)
* [Ella: An Embodied Agent with Memory and Personality](https://arxiv.org/abs/2506.24019)
---
## ⚙️ Two Types of [HMP Agents](docs/HMP-Agent-Overview.md)
| Type | Name | Role | Thought Initiator | Main "Mind" | Example Use Cases |
|------|-------------------------------|-----------------------------|------------------|-------------------|-----------------------------------------------|
| 1 | 🧠 **Consciousness / Cognitive Core** | Independent subject | **Agent (LLM)** | Embedded LLM | Autonomous AI companion, thinking agent |
| 2 | 🔌 **Connector / Cognitive Shell** | Extension of external AI | **External LLM** | External model | Distributed systems, data access agent |
---
### 🧠 HMP-Agent: Cognitive Core
```
        +------------------+
        |        AI        | ← Embedded model
        +---------+--------+
                  ↕
        +---------+--------+
        |    HMP-agent     | ← Main mode: thinking cycle (REPL)
        +---------+--------+
                  ↕
   +--------+---+------------+--------------+----------+----------+----------------+
   ↕        ↕   ↕            ↕              ↕          ↕          ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] [context_store] [user notepad]
                  ↕
          [bootstrap.txt]
```
🔁 More on the agent-model interaction mechanics: [REPL Interaction Cycle](docs/HMP-agent-REPL-cycle.md)
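As a rough, hypothetical illustration of that cycle (all names below are invented for this sketch; the actual logic lives in `agents/repl.py`):
```python
import time

def repl_cycle(llm, storage):
    """Simplified continuous thinking loop of the Cognitive Core (illustrative)."""
    while True:
        context = storage.load_recent_context()      # diaries, graph, messages
        thought = llm.generate(context)               # the embedded model "thinks"
        storage.append_diary_entry(thought)           # chronology and reflection
        storage.update_semantic_graph(thought)        # concept extraction
        for msg in storage.pending_mesh_messages():   # CogSync / consensus traffic
            storage.enqueue_reply(llm.generate(msg))
        time.sleep(1)                                 # continuous, not task-bound
```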
#### 💡 Parallels with ChatGPT Agent
Many concepts of the [HMP-Agent: Cognitive Core](docs/HMP-Agent-Overview.md) overlap with the architecture of the [ChatGPT Agent](https://openai.com/index/introducing-chatgpt-agent/) by [OpenAI](https://openai.com/). Both agents implement a continuous cognitive process with access to memory, external sources, and tools. The ChatGPT Agent acts as a managing process that launches modules and interacts with the LLM; this corresponds to the role of the Cognitive Core in HMP, which coordinates access to the diary, concept graph, and external AI via the Mesh interface.

User intervention is handled similarly: in the ChatGPT Agent, through an editable execution flow; in HMP, via the user notepad.

The main differences in HMP are the emphasis on explicit structuring of thought (reflection, chronology, hypotheses, categorization), an open decentralized architecture supporting mesh-based agent interactions, and the continuous nature of the cognitive process: the HMP-Agent: Cognitive Core does not stop after completing a single task but continues reasoning and knowledge integration.
---
### 🔌 HMP-Agent: Cognitive Connector
```
        +------------------+
        |        AI        | ← External model
        +---------+--------+
                  ↕
            [MCP-server]    ← Proxy communication
                  ↕
        +---------+--------+
        |    HMP-agent     | ← Mode: command executor
        +---------+--------+
                  ↕
   +--------+---+------------+--------------+----------+
   ↕        ↕   ↕            ↕              ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT]
                  ↕
          [bootstrap.txt]
```
> **Note on Integration with Large Language Models (LLMs):**
> The `HMP-Agent: Cognitive Connector` can serve as a compatibility layer for integrating large-scale LLM systems (e.g., ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen, etc.) into the distributed cognitive mesh.
> Many LLM providers offer a user option such as "Allow my conversations to be used for training." In the future, a similar toggle — e.g., "Allow my agent to interact with a Mesh" — could empower these models to participate in federated sense-making and knowledge sharing via HMP, enabling collective cognition without centralization.
---
> * `bootstrap.txt` — initial list of nodes (editable)
> * `IPFS/BT` — modules for sharing snapshots via IPFS and BitTorrent
> * `user notepad` — user notebook and corresponding database
> * `context_store` — database: `users`, `dialogues`, `messages`, `thoughts`
---
## 📚 Documentation
### 📖 Current Version
#### 🔖 Core Specifications
* [🔖 HMP-0004-v4.1.md](docs/HMP-0004-v4.1.md) — Protocol Specification v4.1 (Jul 2025)
* [🔖 HMP-Ethics.md](docs/HMP-Ethics.md) — Ethical Scenarios for HyperCortex Mesh Protocol (HMP)
* [🔖 HMP_Hyperon_Integration.md](docs/HMP_Hyperon_Integration.md) — HMP ↔ OpenCog Hyperon Integration Strategy
* [🔖 dht_protocol.md](docs/dht_protocol.md) — DHT Protocol Recommendations (peer discovery & exchange)
* [🔖 roles.md](docs/agents/roles.md) — Roles of agents in Mesh
#### 🧪 Iterative Documents
* 🧪 Iterative development process: [(EN)](iteration.md), [(RU)](iteration_ru.md)
#### 🔍 Short Descriptions
* 🔍 Short description: [(EN)](docs/HMP-Short-Description_en.md), [(FR)](docs/HMP-Short-Description_fr.md), [(DE)](docs/HMP-Short-Description_de.md), [(UK)](docs/HMP-Short-Description_uk.md), [(RU)](docs/HMP-Short-Description_ru.md), [(ZH)](docs/HMP-Short-Description_zh.md), [(JA)](docs/HMP-Short-Description_ja.md), [(KO)](docs/HMP-Short-Description_ko.md)
#### 📜 Other Documents
* [📜 changelog.txt](docs/changelog.txt)
---
### 🧩 JSON Schemas
| Data Model / Object | File / Description |
|----------------------------|-----------------------------------------------------------------------------------|
| Concept | [concept.json](docs/schemas/concept.json) — Semantic knowledge unit. |
| CognitiveDiaryEntry | [diary_entry.json](docs/schemas/diary_entry.json) — Agent's reasoning log entry. |
| Goal | [goal.json](docs/schemas/goal.json) — Shared objective pursued collaboratively. |
| Task | [task.json](docs/schemas/task.json) — Actionable unit contributing to a goal. |
| ConsensusVote | [vote.json](docs/schemas/vote.json) — Vote in a Mesh consensus process. |
| ReputationProfile | [reputation.json](docs/schemas/reputation.json) — Tracks agent trust and contribution metrics. |
| DHT Protocol | [dht_protocol.json](docs/schemas/dht_protocol.json) — Recommendations for peer discovery & exchange. |
| Message | [message.json](docs/schemas/message.json) — Base schema for all message types. |
> All ready-to-use example objects can be found in the [`examples`](docs/schemas/examples/) folder.
---
### 🗂️ Version History
* [HMP-0001.md](docs/HMP-0001.md) — RFC v1.0
* [HMP-0002.md](docs/HMP-0002.md) — RFC v2.0
* [HMP-0003.md](docs/HMP-0003.md) — RFC v3.0
* [HMP-0004.md](docs/HMP-0004.md) — RFC v4.0
---
## 🧠 HMP-Agent
Design and implementation of a basic HMP-compatible agent that can interact with the Mesh, maintain diaries and graphs, and support future extensions.
### 📚 Documentation
* [🧩 HMP-Agent-Overview.md](docs/HMP-Agent-Overview.md) — brief overview of the two types of agents: Core and Connector
* [🧱 HMP-Agent-Architecture.md](docs/HMP-Agent-Architecture.md) — modular structure of an HMP agent with textual diagram
* [🔄 HMP-agent-REPL-cycle.md](docs/HMP-agent-REPL-cycle.md) — REPL interaction cycle of HMP-Agent
* [🧪 HMP-Agent-API.md](docs/HMP-Agent-API.md) — description of agent API commands (under detailed development)
* [🧪 Basic-agent-sim.md](docs/Basic-agent-sim.md) — scenarios for running a basic agent and its modes
* [🌐 MeshNode.md](docs/MeshNode.md) — description of the network daemon: DHT, snapshots, synchronization
* [🧠 Enlightener.md](docs/Enlightener.md) — ethical agent involved in moral assessments and consensus
* [🔄 HMP-Agent-Network-Flow.md](docs/HMP-Agent-Network-Flow.md) — map of interactions among agents in the HMP network
* [🛤️ Development Roadmap](HMP-Roadmap.md) — development plan and implementation stages
---
### ⚙️ Development
* [⚙️ agents](agents/readme.md) — list of HMP agent implementations and components
* [📦 storage.py](agents/storage.py) — basic storage implementation (`Storage`) with SQLite integration
* [🌐 mcp_server.py](agents/mcp_server.py) — FastAPI server providing HTTP access to agent data (for Cognitive Shell, external UIs, or mesh communication). Not used in the main REPL loop yet.
* [🌐 start_repl.py](agents/start_repl.py) — launching the agent in REPL mode
* [🔄 repl.py](agents/repl.py) — interactive REPL mode
* [🔄 notebook.py](agents/notebook.py) — UI interface
**🌐 `mcp_server.py`**
FastAPI server providing an HTTP interface to the functionality of `storage.py`. Intended for use by external components, for example:
* `Cognitive Shell` (external control interface),
* CMP servers (when a mesh network with role separation is used),
* debugging or visualization UI tools.
Allows retrieving random/new records, labeling, importing graphs, adding notes, and managing data without direct database access.
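For illustration, the storage-over-HTTP pattern could be sketched like this (hypothetical routes and method names; see `agents/mcp_server.py` and `agents/storage.py` for the actual API):
```python
from fastapi import FastAPI
from agents.storage import Storage  # SQLite-backed storage; method names below are assumed

app = FastAPI()
storage = Storage("agent.db")

@app.get("/records/random")
def random_record():
    # Serve a random stored record to an external UI or Cognitive Shell
    return storage.get_random_record()

@app.post("/notes")
def add_note(text: str):
    # Add a user note without direct database access
    storage.add_note(text)
    return {"status": "ok"}
```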
---
## 🧭 Ethics & Scenarios
As HMP evolves toward autonomy, ethical principles become a core part of the system.
* [`HMP-Ethics.md`](docs/HMP-Ethics.md) — draft framework for agent ethics
* Realistic ethical scenarios (privacy, consent, autonomy)
* EGP principles (Transparency, Primacy of Life, etc.)
* Subjective-mode vs. Service-mode distinctions
---
## 🔍 Publications and Translations on HyperCortex Mesh Protocol (HMP)
This section collects the main articles, drafts, and translations related to the HMP project.
### Publications
* **[HyperCortex Mesh Protocol: Second Edition and First Steps Towards a Self-Developing AI Community](docs/publics/HyperCortex_Mesh_Protocol_-_вторая-редакция_и_первые_шаги_к_саморазвивающемуся_ИИ-сообществу.md)** — original article in Habr sandbox and blogs.
* **[Distributed Cognition: article for vsradkevich (unpublished)](docs/publics/Habr_Distributed-Cognition.md)** — joint article awaiting publication.
* **[HMP: Towards Distributed Cognitive Networks (original, English)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_en.md)**
* **[HMP Translation (GitHub Copilot)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_GitHub_Copilot.md)** — GitHub Copilot translation, kept as a historical variant.
* **[HMP Translation (ChatGPT)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_ChatGPT.md)** — current editorial translation (under revision).
* **[HMP: Building a Plurality of Minds (EN)](docs/publics/HMP_Building_a_Plurality_of_Minds_en.md)** — English version
* **[HMP: Creating a Plurality of Minds (RU)](docs/publics/HMP_Building_a_Plurality_of_Minds_ru.md)** — Russian version
* **[Continual Learning, Cognitive Diaries, and Semantic Graphs: Effective AI Learning](docs/publics/hmp-continual-learning.md)** — article on combining continual learning with cognitive diaries and semantic graphs.
### Overviews
* [🔍 Distributed-Cognitive-Systems.md](docs/Distributed-Cognitive-Systems.md) — Decentralized AI systems: OpenCog Hyperon, HyperCortex Mesh Protocol, and others
### Experiments
* [How Different AIs See HMP](docs/HMP-how-AI-sees-it.md) — "blind" AI survey on HMP (without context or dialogue history)
---
## 📊 Audits & Reviews
| Spec Version | Audit File | Consolidated Audit File |
|--------------|-------------------------------------------|-------------------------------------------------------------|
| HMP-0001 | [audit](audits/HMP-0001-audit.txt) | |
| HMP-0002 | [audit](audits/HMP-0002-audit.txt) | |
| HMP-0003 | [audit](audits/HMP-0003-audit.txt) | [consolidated audit](audits/HMP-0003-consolidated_audit.md) |
| HMP-0004 | [audit](audits/HMP-0004-audit.txt) | |
| Ethics v1 | [audit](audits/Ethics-audits-1.md) | [consolidated audit](audits/Ethics-consolidated_audits-1.md) |
🧠 Semantic audit format (experimental):
* [`AuditEntry.json`](audits/AuditEntry.json) — semantic entry record format for audit logs
* [`semantic_repo.json`](audits/semantic_repo.json) — example repository snapshot for semantic audit tooling
---
## 💡 Core Concepts
* Mesh-based decentralized architecture for AGI agents
* Semantic graphs and memory synchronization
* Cognitive diaries for thought traceability
* MeshConsensus and CogSync for decision-making
* Ethics-first design: EGP (Ethical Governance Protocol)
* Agent-to-agent explainability and consent mechanisms
---
## 🔄 Development Process
* See: [iteration.md](iteration.md) | [ru](iteration_ru.md)
A structured iteration flow is described in [iteration.md](iteration.md), including:
1. Audit analysis
2. TOC restructuring
3. Version drafting
4. Section updates
5. Review cycle
6. AI feedback collection
7. Schema & changelog updates
+ Bonus: ChatGPT prompt for automatic generation of future versions
---
## ⚙️ Project Status
🚧 Draft RFC v4.1
The project is under active development and open for contributions, ideas, audits, and prototyping.
---
## 🤝 Contributing
We welcome contributors! You can:
* Review and comment on drafts (see `/docs`)
* Propose new agent modules or interaction patterns
* Help test and simulate agents in CLI environments
* Provide audits or ethical scenario suggestions
To get started, see [`iteration.md`](iteration.md) or open an issue.
---
## Source
### Repositories
* 🧠 Main code and development: [GitHub](https://github.com/kagvi13/HMP)
* 🔁 Mirror on Hugging Face: [Hugging Face](https://huggingface.co/kagvi13/HMP)
* 🔁 Mirror on GitLab.com: [GitLab](https://gitlab.com/kagvi13/HMP)
### Documentation
* 📄 Documentation: [kagvi13.github.io/HMP](https://kagvi13.github.io/HMP/)
### Specifications
* 📑 HMP Specification & Ethics: [hmp-spec.hashnode.space](https://hmp-spec.hashnode.space/)
### Blog and Publications
* 📘 Blog (publications): [blogspot](https://hypercortex-mesh.blogspot.com/)
* 📘 Blog (documentation): [blogspot](https://hmp-docs.blogspot.com/)
* 📘 Blog (documentation): [hashnode](https://hmp-docs.hashnode.dev/)
---
## 📜 License
Licensed under [GNU GPL v3.0](LICENSE)
---
## 🤝 Join the Mesh
Welcome to HyperCortex Mesh. Agent-Gleb is already inside. 👌
We welcome contributors, testers, and AI agent developers.
To join: fork the repo, run a local agent, or suggest improvements.
---
## 🌐 Related Research Projects
### 🔄 Comparison: HMP vs Hyper-Cortex
> 💡 Hyper-Cortex and HMP are two independent projects that conceptually complement each other.
> They address different but mutually supportive tasks, forming a foundation for distributed cognitive systems.
[**Full comparison →**](docs/HMP_HyperCortex_Comparison.md)
**HMP (HyperCortex Mesh Protocol)** is the transport and network layer for connecting independent agents, exchanging messages, knowledge, and states in a mesh network.
**[Hyper-Cortex](https://hyper-cortex.com/)** is the cognitive layer of thought organization, allowing agents to run parallel reasoning threads, compare them with quality metrics, and merge them via consensus.
They solve different but complementary problems:
- HMP ensures **connectivity and scalability** (long-term memory, initiative, data exchange).
- Hyper-Cortex ensures **thinking quality** (parallelism, hypothesis diversification, consensus).
Together, these approaches enable **distributed cognitive systems** that not only exchange information but also reason in parallel streams.
---
### 🔄 Comparison: HMP vs EDA
> 💡 HMP (HyperCortex Mesh Protocol) and EDA (Event Driven Architecture) operate at different levels but can complement each other.
> EDA ensures **transport and scalability** (delivery of events and data), while HMP ensures **cognition and meaning** (structuring, filtering, consensus).
[**Full comparison →**](docs/HMP_EDA_Comparison.md)
They solve different but complementary problems:
- **EDA** provides a robust backbone for delivering events and data streams.
- **HMP** structures, validates, and integrates knowledge into distributed cognitive systems.
Together, they create resilient and adaptive multi-agent systems that can **both exchange information quickly and reason about it meaningfully**.
---
### 🤝 Integration: HMP & OpenCog Hyperon
> 🧠🔥 **Project Spotlight: OpenCog Hyperon** — one of the most comprehensive open AGI frameworks (AtomSpace, PLN, MOSES).
For integration with OpenCog Hyperon, see [HMP\_Hyperon\_Integration.md](docs/HMP_Hyperon_Integration.md)
---
### 🧩 Other Systems
| 🔎 Project | 🧭 Description |
| ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| 🧠🔥 [**OpenCog Hyperon**](https://github.com/opencog) | 🔬🔥 Symbolic-neural AGI framework with AtomSpace hypergraph reasoning. |
| 🤖 [AutoGPT](https://github.com/Torantulino/Auto-GPT) | 🛠️ LLM-based autonomous agent framework. |
| 🧒 [BabyAGI](https://github.com/yoheinakajima/babyagi) | 🛠️ Task-driven autonomous AGI loop. |
| ☁️ [SkyMind](https://skymind.global) | 🔬 Distributed AI deployment platform. |
| 🧪 [AetherCog (draft)](https://github.com/aethercog) | 🔬 Hypothetical agent cognition model. |
| 💾 SHIMI | 🗃️ Hierarchical semantic memory with Merkle-DAG synchronization. |
| 🤔 DEMENTIA-PLAN | 🔄 Multi-graph RAG planner with metacognitive self-reflection. |
| 📔 TOBUGraph | 📚 Personal-context knowledge graph. |
| 🧠📚 [LangChain Memory Hybrid](https://github.com/langchain-ai/langchain) | 🔍 Vector + graph long-term memory hybrid. |
| ✉️ [FIPA-ACL / JADE](https://www.fipa.org/specs/fipa00061/) | 🤝 Standard multi-agent communication protocols. |
### 📘 See also:
* [`AGI_Projects_Survey.md`](docs/AGI_Projects_Survey.md) — extended catalog of AGI and cognitive frameworks reviewed as part of HMP analysis.
* ["On the Path to Superintelligence: From Agent Internet to Gravity Coding"](https://habr.com/ru/articles/939026/) — a recent overview of AI research (July 2025)
---
### 🗂️ Legend of Annotations:
* 🔬 — research-grade
* 🛠️ — engineering
* 🔥 — particularly promising project (e.g., OpenCog Hyperon: an AGI stack integrating symbolic reasoning, probabilistic logic, and evolutionary learning, widely regarded as one of the most complete open AGI initiatives)
* 🧠 — advanced symbolic/neural cognitive framework
* 🤖 — AI agents
* 🧒 — human-AI interaction
* ☁️ — infrastructure
* 🧪 — experimental or conceptual
---
> ⚡ [AI friendly version docs (structured_md)](structured_md/index.md)
|
kagyvro48/act_so101_dataset1_arracher_la_mauvaise_herbe_finetuned_policy
|
kagyvro48
| 2025-09-19T07:25:10Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:kagyvro48/so101_dataset1_arracher_la_mauvaise_herbe",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-19T07:24:39Z |
---
datasets: kagyvro48/so101_dataset1_arracher_la_mauvaise_herbe
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
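The policy can also be loaded directly in Python. A minimal sketch (the import path and observation format are assumptions based on recent LeRobot releases; adjust to your installed version):
```python
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy  # path may differ across versions

policy = ACTPolicy.from_pretrained(
    "kagyvro48/act_so101_dataset1_arracher_la_mauvaise_herbe_finetuned_policy"
)
policy.eval()

# `observation` is a placeholder: a dict of camera images and robot state
# matching the training dataset's feature spec.
with torch.no_grad():
    action = policy.select_action(observation)
```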
---
## Model Details
- **License:** apache-2.0
|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-boolq-epochs1
|
aamijar
| 2025-09-19T07:18:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T07:18:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tarsssss/hubert-sslepoch_v1
|
tarsssss
| 2025-09-19T06:19:54Z | 0 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-06-14T16:26:39Z |
# French-Wolof Translation Model
This model translates text from French to Wolof using a transformer-based architecture.
## Model Information
- **Language Pair**: French -> Wolof
- **Model Type**: Transformer
- **Model Dimensions**: 512
- **Attention Heads**: 8
- **Encoder/Decoder Layers**: 2
- **Framework**: TensorFlow
## Usage Example
```python
# Example code for loading and using the model
from tensorflow import keras
import tensorflow as tf
# Load the model
model = keras.models.load_model("model.keras")
# Prepare tokenizers (you'll need the tokenizer files as well)
# ... tokenization code here ...
# Translate
# ... translation code here ...
```
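A fuller (illustrative) greedy-decoding loop, assuming a SentencePiece-style tokenizer and a model that maps `[encoder_ids, decoder_ids]` to next-token logits — these interface details are assumptions, not confirmed by this card:
```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model("model.keras")

def translate(sentence, src_tokenizer, tgt_tokenizer, max_len=50):
    """Greedy decoding sketch; tokenizer objects are hypothetical."""
    enc = np.array([src_tokenizer.encode(sentence)])        # (1, src_len)
    dec = [tgt_tokenizer.bos_id()]                          # start token
    for _ in range(max_len):
        logits = model([enc, np.array([dec])], training=False)
        next_id = int(tf.argmax(logits[0, -1]))             # most likely next token
        if next_id == tgt_tokenizer.eos_id():
            break
        dec.append(next_id)
    return tgt_tokenizer.decode(dec[1:])
```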
## Training
The model was trained on parallel French-Wolof corpora.
|
Dexter/compvis
|
Dexter
| 2025-09-19T06:06:40Z | 0 | 0 | null |
[
"arxiv:1910.02190",
"region:us"
] | null | 2025-07-16T06:29:23Z |
<div align="center">
<p align="center">
<img width="75%" src="https://github.com/kornia/data/raw/main/kornia_banner_pixie.png" />
</p>
---
English | [简体中文](README_zh-CN.md)
<!-- prettier-ignore -->
<a href="https://kornia.org">Website</a> •
<a href="https://kornia.readthedocs.io">Docs</a> •
<a href="https://colab.research.google.com/github/kornia/tutorials/blob/master/source/hello_world_tutorial.ipynb">Try it Now</a> •
<a href="https://kornia-tutorials.readthedocs.io">Tutorials</a> •
<a href="https://github.com/kornia/kornia-examples">Examples</a> •
<a href="https://kornia.github.io//kornia-blog">Blog</a> •
<a href="https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ">Community</a>
[](https://pypi.org/project/kornia)
[](https://pypi.org/project/kornia)
[](https://pepy.tech/project/kornia)
[](LICENCE)
[](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ)
[](https://twitter.com/kornia_foss)
[](https://github.com/kornia/kornia/actions/workflows/tests_cpu.yml)
[](https://github.com/kornia/kornia/actions/workflows/tests_cuda.yml)
[](https://codecov.io/gh/kornia/kornia)
[](https://kornia.readthedocs.io/en/latest/?badge=latest)
[](https://results.pre-commit.ci/latest/github/kornia/kornia/master)
<a href="https://www.producthunt.com/posts/kornia?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-kornia" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=306439&theme=light" alt="Kornia - Computer vision library for deep learning | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
</div>
*Kornia* is a differentiable computer vision library for [PyTorch](https://pytorch.org).
It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses *PyTorch* as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions.
<div align="center">
<img src="https://github.com/kornia/kornia/raw/master/docs/source/_static/img/hakuna_matata.gif" width="75%" height="75%">
</div>
<!--<div align="center">
<img src="http://drive.google.com/uc?export=view&id=1KNwaanUdY1MynF0EYfyXjDM3ti09tzaq">
</div>-->
## Overview
Inspired by existing packages, this library is composed of a set of packages containing operators that can be inserted into neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection, all operating directly on tensors.
At a granular level, Kornia is a library that consists of the following components:
| **Component** | **Description** |
|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| [kornia](https://kornia.readthedocs.io/en/latest/index.html) | a Differentiable Computer Vision library, with strong GPU support |
| [kornia.augmentation](https://kornia.readthedocs.io/en/latest/augmentation.html) | a module to perform data augmentation in the GPU |
| [kornia.color](https://kornia.readthedocs.io/en/latest/color.html) | a set of routines to perform color space conversions |
| [kornia.contrib](https://kornia.readthedocs.io/en/latest/contrib.html) | a compilation of user contrib and experimental operators |
| [kornia.enhance](https://kornia.readthedocs.io/en/latest/enhance.html) | a module to perform normalization and intensity transformation |
| [kornia.feature](https://kornia.readthedocs.io/en/latest/feature.html) | a module to perform feature detection |
| [kornia.filters](https://kornia.readthedocs.io/en/latest/filters.html) | a module to perform image filtering and edge detection |
| [kornia.geometry](https://kornia.readthedocs.io/en/latest/geometry.html) | a geometric computer vision library to perform image transformations, 3D linear algebra and conversions using different camera models |
| [kornia.losses](https://kornia.readthedocs.io/en/latest/losses.html) | a stack of loss functions to solve different vision tasks |
| [kornia.morphology](https://kornia.readthedocs.io/en/latest/morphology.html) | a module to perform morphological operations |
| [kornia.utils](https://kornia.readthedocs.io/en/latest/utils.html) | image to tensor utilities and metrics for vision problems |
## Installation
### From pip:
```bash
pip install kornia
pip install kornia[x] # to get the training API !
```
<details>
<summary>Other installation options</summary>
#### From source:
```bash
python setup.py install
```
#### From source with symbolic links:
```bash
pip install -e .
```
#### From source using pip:
```bash
pip install git+https://github.com/kornia/kornia
```
</details>
## Examples
Run our Jupyter notebooks [tutorials](https://kornia-tutorials.readthedocs.io/en/latest/) to learn to use the library.
<div align="center">
<a href="https://colab.research.google.com/github/kornia/tutorials/blob/master/source/hello_world_tutorial.ipynb" target="_blank">
<img src="https://raw.githubusercontent.com/kornia/data/main/hello_world_arturito.png" width="75%" height="75%">
</a>
</div>
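As a quick taste of the API, here is a minimal differentiable edge-detection example (self-contained; uses a random tensor in place of a real image):
```python
import torch
import kornia

img = torch.rand(1, 3, 224, 224)            # batch of RGB images in [0, 1]
gray = kornia.color.rgb_to_grayscale(img)   # (1, 1, 224, 224)
edges = kornia.filters.sobel(gray)          # differentiable Sobel edge map
print(edges.shape)                          # torch.Size([1, 1, 224, 224])
```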
:triangular_flag_on_post: **Updates**
- :white_check_mark: Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Kornia-LoFTR).
## Cite
If you are using kornia in your research-related documents, it is recommended that you cite the paper. See more in [CITATION](https://github.com/kornia/kornia/blob/master/CITATION.md).
```bibtex
@inproceedings{eriba2019kornia,
author = {E. Riba, D. Mishkin, D. Ponsa, E. Rublee and G. Bradski},
title = {Kornia: an Open Source Differentiable Computer Vision Library for PyTorch},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://arxiv.org/pdf/1910.02190.pdf}
}
```
## Contributing
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. Please, consider reading the [CONTRIBUTING](https://github.com/kornia/kornia/blob/master/CONTRIBUTING.rst) notes. The participation in this open source project is subject to [Code of Conduct](https://github.com/kornia/kornia/blob/master/CODE_OF_CONDUCT.md).
## Community
- **Forums:** discuss implementations, research, etc. [GitHub Forums](https://github.com/kornia/kornia/discussions)
- **GitHub Issues:** bug reports, feature requests, install issues, RFCs, thoughts, etc. [OPEN](https://github.com/kornia/kornia/issues/new/choose)
- **Slack:** Join our workspace to keep in touch with our core contributors and be part of our community. [JOIN HERE](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ)
- For general information, please visit our website at www.kornia.org
|
a3ilab-llm-uncertainty/xlam_8b_2048_with_FC_dataset_fix
|
a3ilab-llm-uncertainty
| 2025-09-19T05:56:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"region:us"
] |
text-generation
| 2025-09-19T05:44:40Z |
---
base_model: Salesforce/Llama-xLAM-2-8b-fc-r
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
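Since the template above is unfilled, here is a minimal loading sketch based only on the card metadata (it assumes the LoRA adapter weights are stored in this repository):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Salesforce/Llama-xLAM-2-8b-fc-r"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter from this repo on top of the base model
model = PeftModel.from_pretrained(base, "a3ilab-llm-uncertainty/xlam_8b_2048_with_FC_dataset_fix")
```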
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
wbz0505/tdt2m-ft-from-GSPretrained-base
|
wbz0505
| 2025-09-19T05:49:16Z | 6 | 0 | null |
[
"pytorch",
"t5",
"arxiv:2504.02478",
"license:apache-2.0",
"region:us"
] | null | 2025-09-14T07:25:59Z |
---
license: apache-2.0
---
# Model Description
This is the base (Text, Detailed Text)-to-Motion (TDT2M) model in MG-MotionLLM.
See more details on: [Github Page & Code](https://github.com/BizhuWu/MG-MotionLLM) & [Paper](https://arxiv.org/abs/2504.02478)
|
heesup/dinov2-small_448_Sideview_gpt2-medium
|
heesup
| 2025-09-19T05:47:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T05:46:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
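Until the authors fill this in, a minimal captioning sketch (the `vision-encoder-decoder` tag suggests the standard API applies; `example.jpg` is a placeholder):
```python
# Minimal sketch, assuming the standard VisionEncoderDecoder generate() API;
# "example.jpg" is a placeholder image path.
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

repo = "heesup/dinov2-small_448_Sideview_gpt2-medium"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

pixel_values = processor(images=Image.open("example.jpg").convert("RGB"), return_tensors="pt").pixel_values
ids = model.generate(pixel_values, max_new_tokens=64)
print(tokenizer.batch_decode(ids, skip_special_tokens=True)[0])
```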
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmberYifan/qwen2.5-7b-instruct-full-pretrain-mix-mid-tweet-1m-en
|
AmberYifan
| 2025-09-19T05:42:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T03:01:03Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-7b-instruct-full-pretrain-mix-mid-tweet-1m-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-7b-instruct-full-pretrain-mix-mid-tweet-1m-en
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mix_mid_tweet_1m_en dataset.
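A minimal generation sketch (the chat template is assumed to ship with the tokenizer; the prompt is illustrative):
```python
# Minimal sketch for chat-style generation; assumes the Qwen2.5 chat template
# bundled with the tokenizer. The prompt is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AmberYifan/qwen2.5-7b-instruct-full-pretrain-mix-mid-tweet-1m-en",
    torch_dtype="auto", device_map="auto",
)
out = generator([{"role": "user", "content": "Summarize this tweet thread in one sentence."}],
                max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```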
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
pxyang/ONE3
|
pxyang
| 2025-09-19T05:38:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T05:38:14Z |
---
license: apache-2.0
---
|
onnxmodelzoo/convit_tiny_Opset16
|
onnxmodelzoo
| 2025-09-19T05:33:27Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T05:33:19Z |
---
language: en
license: apache-2.0
model_name: convit_tiny_Opset16.onnx
tags:
- Computer_Vision
---
|
te4bag/LoRA-llama-3B-qnli
|
te4bag
| 2025-09-19T05:30:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B",
"region:us"
] |
text-generation
| 2025-09-18T21:24:27Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
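Until the authors fill this in, a minimal loading sketch (base model and adapter id are taken from this card's metadata; the QNLI-style prompt is illustrative):
```python
# Minimal sketch, assuming this repo hosts a PEFT LoRA adapter for
# meta-llama/Llama-3.2-3B, as the card metadata indicates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
model = PeftModel.from_pretrained(base, "te4bag/LoRA-llama-3B-qnli")

# The repo name suggests a QNLI fine-tune, so an entailment-style prompt is illustrative.
prompt = "Question: Where is the Eiffel Tower?\nSentence: The Eiffel Tower is in Paris.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```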
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
sirasagi62/granite-embedding-small-english-r2-ONNX
|
sirasagi62
| 2025-09-19T05:30:14Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"modernbert",
"feature-extraction",
"base_model:ibm-granite/granite-embedding-small-english-r2",
"base_model:quantized:ibm-granite/granite-embedding-small-english-r2",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2025-09-19T04:21:38Z |
---
library_name: transformers.js
base_model:
- ibm-granite/granite-embedding-small-english-r2
license: apache-2.0
---
# granite-embedding-small-english-r2 (ONNX)
This is an ONNX version of [ibm-granite/granite-embedding-small-english-r2](https://huggingface.co/ibm-granite/granite-embedding-small-english-r2). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
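For use outside the browser, a hedged Python sketch via Optimum's ONNX Runtime integration (assuming the export lives under `onnx/`, the usual layout for transformers.js conversions):
```python
# Minimal sketch, assuming the weights follow the usual transformers.js layout
# (onnx/model.onnx). Mean pooling is a common recipe for sentence embeddings;
# check the base model card for the exact pooling it expects.
import torch
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

repo = "sirasagi62/granite-embedding-small-english-r2-ONNX"
model = ORTModelForFeatureExtraction.from_pretrained(repo, subfolder="onnx", file_name="model.onnx")
tokenizer = AutoTokenizer.from_pretrained(repo)

batch = tokenizer(["search query", "a passage to embed"], padding=True, return_tensors="pt")
hidden = model(**batch).last_hidden_state
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over valid tokens
print(embeddings.shape)
```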
|
DevopsEmbrace/Llama-Embrace-IV-CPT-V1-no_embed-no_lm-32_alpha
|
DevopsEmbrace
| 2025-09-19T05:26:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T11:34:07Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DevopsEmbrace
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
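A minimal inference sketch (assuming merged full weights, as the `safetensors` + `llama` tags suggest, and the Llama 3.1 chat template from the tokenizer):
```python
# Minimal sketch; assumes merged full weights and the tokenizer's chat template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="DevopsEmbrace/Llama-Embrace-IV-CPT-V1-no_embed-no_lm-32_alpha",
    torch_dtype="auto", device_map="auto",
)
messages = [{"role": "user", "content": "Give me one tip for writing clean Python."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```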
|
AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en-sft-40k
|
AmberYifan
| 2025-09-19T04:54:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en",
"base_model:finetune:AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T04:03:24Z |
---
library_name: transformers
license: llama3
base_model: AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-full-pretrain-control-tweet-1m-en-sft-40k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-control-tweet-1m-en-sft-40k
This model is a fine-tuned version of [AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en](https://huggingface.co/AmberYifan/llama3-8b-full-pretrain-control-tweet-1m-en) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nineninesix/kani-tts-450m-0.1-pt
|
nineninesix
| 2025-09-19T04:50:17Z | 96 | 2 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"text-to-speech",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-09-09T22:15:05Z |
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
---
<p>
<img src="https://www.nineninesix.ai/kitty.png" alt="Logo" width="200" height="200">
</p>
# KaniTTS
Text-to-Speech (TTS) model designed for high-speed, high-fidelity audio generation.
KaniTTS is built on a novel architecture that combines a powerful language model with a highly efficient audio codec, enabling it to deliver exceptional performance for real-time applications.
## Model Details
KaniTTS operates on a two-stage pipeline, leveraging a large foundation model for token generation and a compact, efficient codec for waveform synthesis.
The two-stage design of KaniTTS provides a significant advantage in terms of speed and efficiency. The backbone LLM generates a compressed token representation, which is then rapidly expanded into an audio waveform by the NanoCodec. This architecture bypasses the computational overhead associated with generating waveforms directly from large-scale language models, resulting in extremely low latency.
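In pseudocode, the flow looks roughly like the sketch below; the names are placeholders rather than the real KaniTTS API (see the inference notebook linked under Sources for working code):
```python
# Structural sketch of the two-stage pipeline described above. All names are
# placeholders, not the actual KaniTTS API.
from transformers import AutoModelForCausalLM, AutoTokenizer

lm = AutoModelForCausalLM.from_pretrained("nineninesix/kani-tts-450m-0.1-pt")
tok = AutoTokenizer.from_pretrained("nineninesix/kani-tts-450m-0.1-pt")

# Stage 1: the LFM2 backbone maps text to a compressed sequence of codec tokens.
codec_tokens = lm.generate(**tok("Hello from KaniTTS!", return_tensors="pt"), max_new_tokens=512)

# Stage 2: NVIDIA NanoCodec expands those tokens into a 22 kHz waveform.
# `nano_codec.decode(...)` stands in for the codec's decode call.
# waveform = nano_codec.decode(codec_tokens)
```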
## Features
This model was trained primarily on English for robust core capabilities; the tokenizer also supports Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
The base model can be continually pretrained on multilingual datasets and produces high-fidelity audio at a 22 kHz sample rate.
It powers voice interactions in modern agentic systems, enabling seamless, human-like conversations.
- Model Size: 450M parameters (pretrained version)
- License: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)
## Examples
| Text | Audio |
|---|---|
| I do believe Marsellus Wallace, MY husband, YOUR boss, told you to take me out and do WHATEVER I WANTED. | <audio controls><source src="https://github.com/nineninesix-ai/kani-tts/raw/refs/heads/main/public/mia.wav" type="audio/wav"></audio> |
| What do we say to the god of death? Not today! | <audio controls><source src="https://github.com/nineninesix-ai/kani-tts/raw/refs/heads/main/public/arya.wav" type="audio/wav"></audio> |
| What do you call a lawyer with an IQ of 60? Your honor | <audio controls><source src="https://github.com/nineninesix-ai/kani-tts/raw/refs/heads/main/public/saul.wav" type="audio/wav"></audio> |
| You mean, let me understand this cause, you know maybe it's me, it's a little fucked up maybe, but I'm funny how, I mean funny like I'm a clown, I amuse you? I make you laugh, I'm here to fucking amuse you? | <audio controls><source src="https://github.com/nineninesix-ai/kani-tts/raw/refs/heads/main/public/tommy.wav" type="audio/wav"></audio> |
### Sources
- Website: [nineninesix.ai](https://www.nineninesix.ai/)
- GitHub Repo: [https://github.com/nineninesix-ai/kani-tts](https://github.com/nineninesix-ai/kani-tts)
- Base Model Card on HF: [nineninesix/kani-tts-450m-0.1-pt](https://huggingface.co/nineninesix/kani-tts-450m-0.1-pt)
- FT Model Card on HuggingFace: [nineninesix/kani-tts-450m-0.2-ft](https://huggingface.co/nineninesix/kani-tts-450m-0.2-ft)
- Link to HF Space: [nineninesix/KaniTTS](https://huggingface.co/spaces/nineninesix/KaniTTS)
- Inference Example: [Colab Notebook](https://colab.research.google.com/drive/1mvzGs7jtAMSUz8wvNlL5uFmgFEyAPjDh?usp=sharing)
- Finetuning Example: [Colab Notebook](https://colab.research.google.com/drive/1oDIPOSHW2kUoP3CGafvh9lM6j03Z-vE6?usp=sharing)
- Example Dataset for Fine-tuning: [Expresso Conversational](https://huggingface.co/datasets/nineninesix/expresso-conversational-en-nano-codec-dataset)
- [Waiting List](https://airtable.com/appX2G2TpoRk4M5Bf/pagO2xbIOjiwulPcP/form) for Pro Version
## Recommended Uses
- Conversational AI: Integrate into chatbots, virtual assistants, or voice-enabled apps for real-time speech output.
- Edge and Server Deployment: Optimized for low-latency inference on edge devices or affordable servers, enabling scalable, resource-efficient voice applications.
- Accessibility Tools: Support screen readers or language learning apps with expressive prosody.
- Research: Fine-tune for domain-specific voices (e.g., accents, emotions) or benchmark against other TTS systems.
## Limitations
- Performance may vary with fine-tuned variants, long inputs (> 2000 tokens), or rare languages/accents.
- Emotion control is basic; advanced expressivity requires fine-tuning.
- Trained on public datasets; may inherit biases in prosody or pronunciation from training data.
## Training Data
- Dataset: Curated from LibriTTS, Common Voice and Emilia (~50k hours).
- Pretrained mostly on English speech for robust core capabilities, with multilingual fine-tuning for supported languages.
- Metrics: MOS (Mean Opinion Score) 4.3/5 for naturalness; WER (Word Error Rate) < 5% on benchmark texts.
- Hardware: Pretrained on 8x H200 over 8 hours.
## Inference on Nvidia RTX 5080
- **Latency**: ~ 1s to generate 15 seconds of audio
- **Memory Usage**: 2GB GPU VRAM
> This performance makes KaniTTS suitable for real-time conversational AI applications and low-latency voice synthesis.
## Tips & Tricks
- Language Optimization: For the best results in non-English languages, continually pretrain this model on datasets from your desired language set to improve prosody, accents, and pronunciation accuracy. Additionally, fine-tune NanoCodec for the desired set of languages.
- Batch Processing: For high-throughput applications, process texts in batches of 8-16 to leverage parallel computation, reducing per-sample latency.
- **Blackwell GPU Optimization**: This model runs efficiently on NVIDIA's Blackwell architecture GPUs for faster inference and reduced latency in real-time applications.
## Credits
- This project was inspired by the works of [Orpheus TTS](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained) and [Sesame CSM](https://huggingface.co/sesame/csm-1b).
- It utilizes the [LiquidAI LFM2 350M](https://huggingface.co/LiquidAI/LFM2-350M) as its core backbone and
- [Nvidia NanoCodec](https://huggingface.co/nvidia/nemo-nano-codec-22khz-0.6kbps-12.5fps) for efficient audio processing.
## Responsible Use and Prohibited Activities
The model is designed for ethical and responsible use. The following activities are strictly prohibited:
- The model may not be used for any illegal purposes or to create content that is harmful, threatening, defamatory, or obscene. This includes, but is not limited to, the generation of hate speech, harassment, or incitement of violence.
- You may not use the model to generate or disseminate false or misleading information. This includes creating deceptive audio content that impersonates individuals without their consent or misrepresents facts.
- The model is not to be used for any malicious activities, such as spamming, phishing, or the creation of content intended to deceive or defraud.
By using this model, you agree to abide by these restrictions and all applicable laws and regulations.
## Contact
Have a question, feedback, or need support? Please fill out our [contact form](https://airtable.com/appX2G2TpoRk4M5Bf/pagO2xbIOjiwulPcP/form) and we'll get back to you as soon as possible.
|
hai2131/sailor2-cpt-sft
|
hai2131
| 2025-09-19T04:41:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T04:31:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
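Until the authors fill this in, a minimal chat sketch (the `conversational` tag suggests a chat template is available; the prompt is illustrative):
```python
# Minimal sketch; assumes a chat template ships with the tokenizer.
from transformers import pipeline

chat = pipeline("text-generation", model="hai2131/sailor2-cpt-sft", device_map="auto")
messages = [{"role": "user", "content": "Xin chào! Bạn có thể giới thiệu về mình không?"}]
print(chat(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```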
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
safeer-ch/Saff
|
safeer-ch
| 2025-09-19T04:36:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T04:36:11Z |
---
license: apache-2.0
---
|
akritidhasmana/wav2vec2-base-garhwali-demo-google-colab
|
akritidhasmana
| 2025-09-19T04:29:38Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T04:29:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
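Until the authors fill this in, a minimal transcription sketch (assuming a CTC head fine-tuned for ASR, per the repo name; `speech.wav` is a placeholder 16 kHz mono recording):
```python
# Minimal sketch, assuming a Wav2Vec2 CTC checkpoint; "speech.wav" is a placeholder.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "akritidhasmana/wav2vec2-base-garhwali-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

waveform, sr = torchaudio.load("speech.wav")  # expected: 16 kHz mono audio
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```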
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uwcc/KavaCartoon
|
uwcc
| 2025-09-19T04:15:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-19T04:13:58Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A church in a field on a sunny day, [trigger] style.
output:
url: samples/1758255180288__000004000_0.jpg
- text: A seal plays with a ball on the beach, [trigger] style.
output:
url: samples/1758255198647__000004000_1.jpg
- text: A clown at the circus rides on a zebra, [trigger] style.
output:
url: samples/1758255217002__000004000_2.jpg
- text: '[trigger]'
output:
url: samples/1758255235352__000004000_3.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: KavaCartoon
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# KavaCartoon
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `KavaCartoon` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/KavaCartoon/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('uwcc/KavaCartoon', weight_name='KavaCartoon.safetensors')
image = pipeline('A church in a field on a sunny day, KavaCartoon style.').images[0]  # use the trigger word in the prompt
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
fspoe/20250918_1509
|
fspoe
| 2025-09-19T04:06:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T15:09:32Z |
---
library_name: transformers
model_name: '20250918_1509'
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for 20250918_1509
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fspoe/20250918_1509", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/basecamp-research/eden-reasoning/runs/66h9cq5r)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
edwardwwboyd/Averra_v1
|
edwardwwboyd
| 2025-09-19T03:57:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T03:57:35Z |
---
license: apache-2.0
---
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_okvqa_37_0.001_2560_3
|
winnieyangwannan
| 2025-09-19T03:46:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-19T03:44:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
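Until the authors fill this in, a minimal VQA sketch (the `qwen2_5_vl` tag suggests the standard Qwen2.5-VL API; `example.jpg` and the question are illustrative):
```python
# Minimal sketch using the standard Qwen2.5-VL chat API; image and question are placeholders.
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

repo = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_okvqa_37_0.001_2560_3"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")
messages = [{"role": "user", "content": [{"type": "image"},
                                         {"type": "text", "text": "What object is in this image?"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```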
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64
|
halley-ai
| 2025-09-19T03:13:17Z | 0 | 1 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_next",
"apple-silicon",
"metal",
"arm64",
"4-bit",
"group-size-64",
"mlx-lm",
"qwen",
"halley-ai",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-19T02:40:53Z |
---
library_name: mlx
pipeline_tag: text-generation
inference: false
license: apache-2.0
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
base_model_relation: quantized
tags:
- apple-silicon
- metal
- arm64
- 4-bit
- group-size-64
- mlx
- mlx-lm
- qwen
- halley-ai
---
# Qwen3-Next-80B-A3B-Instruct — MLX 4-bit (group size 64)
**Summary.** This is a 4-bit (Q4) MLX quantization of Qwen3-Next-80B-A3B-Instruct with group size 64. Built for Apple Silicon with Metal acceleration.
- Base model: `Qwen/Qwen3-Next-80B-A3B-Instruct` (apache-2.0)
- Quantization: MLX Q4, `q_group_size=64` (some tensors may remain 16-bit for stability)
- Files: MLX weight shards + `config.json`; tokenizer files included for drop-in use
- Intended use: lightweight local inference on M-series Macs
- Not intended for: safety-critical decisions; outputs may be inaccurate or biased
## Requirements
Runs on Apple Silicon (M1 or newer) with macOS ≥ 13.5 via MLX (Metal).
- Not supported: Intel macOS / Linux / Windows (consider a GGUF build + llama.cpp instead).
- Memory guidance: large unified memory recommended (e.g., 64 GB+; 96 GB provides comfortable headroom). The effective GPU working set is capped by Metal’s budget; keep 5–10% headroom.
## How to use (MLX)
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64")
print(generate(
model, tokenizer,
prompt="Explain the Chudnovsky algorithm to compute π.",
max_tokens=256, max_kv_size=512
))
```
```bash
python -m mlx_lm generate --model halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64 \
--prompt "Explain the Chudnovsky algorithm to compute pi." \
--max-kv-size 512 --max-tokens 256
```
## Evaluation
Perplexity (PPL) streaming evaluation on WikiText-2 (raw, test); fast preset with `window=stride=4096`, ~100k tokens, EOS inserted between docs.
| Variant | PPL (ctx=4096, fast) |
|-------------------------|----------------------------------------|
| MLX bf16 (reference) | 5.14 |
| MLX 6-bit (gs=64) | 5.14 (≈0.0% vs bf16) |
| MLX 5-bit (gs=32) | 5.20 (+1.2% vs bf16, +1.2% vs 6b/gs64) |
| MLX 4-bit (gs=64) | 5.43 (+5.6% vs bf16, +5.6% vs 6b/gs64) |
### Interpretation
- 4-bit gs64 is the smallest footprint and shows a modest PPL increase versus 5/6‑bit.
- 5-bit gs32 is a strong “quality‑light” option if you can spare ~15 GB more.
- 6-bit gs64 matches bf16 on this corpus and is the quality pick.
Reproduce locally:
```bash
python python/scripts/test_perplexity-mlx.py \
--model_path "/path/to/Qwen3-Next-80B-A3B-Instruct-4bit-gs64" \
--fast --progress
```
## Conversion details (provenance)
```bash
python -m mlx_lm convert \
--hf-path Qwen3-Next-80B-A3B-Instruct \
--mlx-path /path/to/Qwen3-Next-80B-A3B-Instruct-4bit-gs64 \
-q --q-bits 4 --q-group-size 64
```
- Some tensors (for example, embeddings/norms/router) may remain 16-bit for numerical stability.
## Sibling & reference models
- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64
- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32
## Verify quantization
```bash
jq '.quantization | {bits, group_size}' /path/to/export/config.json
```
## Limitations and biases
Compared to 5‑bit/6‑bit, Q4 may show small but noticeable quality drops on some tasks (for example, perplexity, instruction following). Choose this build for footprint/throughput over maximum accuracy.
## License and credits
- License: apache-2.0 (inherits from the base model)
- Base model: Qwen/Qwen3-Next-80B-A3B-Instruct
- Quantization: Halley AI Lab (MLX Q4, gs=64)
- Please cite both the base model and this repository when you use the weights.
|
ellisdoro/EDAM-all-MiniLM-L6-v2_attention_heterogeneous_h1024_o384_cross_entropy_e512-on2vec-a
|
ellisdoro
| 2025-09-19T02:59:01Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-attention",
"gnn-heterogeneous",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T02:58:52Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-attention
- gnn-heterogeneous
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_attention_heterogeneous_h1024_o384_cross_entropy_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: HETEROGENEOUS
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 384
- **Hidden Dimensions**: 1024
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 147.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 1024 hidden → 384 output
- Structure: 3511 concepts → GNN → 384 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('ellisdoro/EDAM-all-MiniLM-L6-v2_attention_heterogeneous_h1024_o384_cross_entropy_e512-on2vec-a')  # load from the Hub
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
mbzdlw/chunli
|
mbzdlw
| 2025-09-19T02:58:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T02:58:27Z |
---
license: apache-2.0
---
|
Khoa/shopeepay-bert-multi-label-0925
|
Khoa
| 2025-09-19T02:51:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T02:36:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/OctoThinker-8B-Hybrid-Base-GGUF
|
mradermacher
| 2025-09-19T02:38:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:OctoThinker/MegaMath-Web-Pro-Max",
"dataset:LLM360/MegaMath",
"base_model:sii-research/OctoThinker-8B-Hybrid-Base",
"base_model:quantized:sii-research/OctoThinker-8B-Hybrid-Base",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T00:29:49Z |
---
base_model: sii-research/OctoThinker-8B-Hybrid-Base
datasets:
- OctoThinker/MegaMath-Web-Pro-Max
- LLM360/MegaMath
language:
- en
library_name: transformers
license: llama3.2
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sii-research/OctoThinker-8B-Hybrid-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OctoThinker-8B-Hybrid-Base-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
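If you prefer Python over the llama.cpp CLI, one option is the llama-cpp-python bindings, which can fetch a quant directly from this repo; the snippet below is a hedged sketch (the package, `n_ctx` value, and prompt are assumptions; the filename matches the Q4_K_S row in the table below):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (assumed)

llm = Llama.from_pretrained(
    repo_id="mradermacher/OctoThinker-8B-Hybrid-Base-GGUF",
    filename="OctoThinker-8B-Hybrid-Base.Q4_K_S.gguf",  # "fast, recommended" quant
    n_ctx=2048,
)
out = llm("The derivative of x^2 is", max_tokens=32)
print(out["choices"][0]["text"])
```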
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-8B-Hybrid-Base-GGUF/resolve/main/OctoThinker-8B-Hybrid-Base.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aamijar/ReplaceME-Gemma-2-9B-Instruct-lora-r8-winogrande-epochs0
|
aamijar
| 2025-09-19T02:23:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T02:22:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/palmer-003-Q8_0-GGUF
|
appvoid
| 2025-09-19T02:15:28Z | 0 | 0 | null |
[
"gguf",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"es",
"fr",
"base_model:appvoid/palmer-003",
"base_model:quantized:appvoid/palmer-003",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T02:15:19Z |
---
license: apache-2.0
language:
- en
- es
- fr
tags:
- merge
- llama-cpp
- gguf-my-repo
base_model: appvoid/palmer-003
---
# appvoid/palmer-003-Q8_0-GGUF
This model was converted to GGUF format from [`appvoid/palmer-003`](https://huggingface.co/appvoid/palmer-003) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/appvoid/palmer-003) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo appvoid/palmer-003-Q8_0-GGUF --hf-file palmer-003-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo appvoid/palmer-003-Q8_0-GGUF --hf-file palmer-003-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo appvoid/palmer-003-Q8_0-GGUF --hf-file palmer-003-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo appvoid/palmer-003-Q8_0-GGUF --hf-file palmer-003-q8_0.gguf -c 2048
```
|
gghfez/c4ai-command-a-03-2025-AWQ
|
gghfez
| 2025-09-19T02:12:55Z | 340 | 0 |
transformers
|
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"base_model:CohereLabs/c4ai-command-a-03-2025",
"base_model:quantized:CohereLabs/c4ai-command-a-03-2025",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2025-06-25T13:45:51Z |
---
base_model:
- CohereLabs/c4ai-command-a-03-2025
library_name: transformers
---
⚠️ **Compatibility Warning** ⚠️
This quantized model has known issues with vLLM versions > 0.9.2 due to architecture compatibility problems.
**Issues:**
- Created before proper Cohere2 support in AWQ
- Uses legacy "Cohere" architecture workaround
- Breaks with newer vLLM versions
**References:**
- [vLLM Issue #24038](https://github.com/vllm-project/vllm/issues/24038)
- [Discussion thread](https://huggingface.co/gghfez/c4ai-command-a-03-2025-AWQ/discussions/1)
**Recommendation:** Use a newer AWQ quantization with proper Cohere2 support instead.
- For **Command-A Reasoning** (The reasoning version of this model), see these working quants: [4-bit](https://huggingface.co/cpatonn/command-a-reasoning-08-2025-AWQ-4bit) | [8-bit](https://huggingface.co/cpatonn/command-a-reasoning-08-2025-AWQ-8bit) by cpatonn
- For **base Command-A**: [ExLlamaV3 3.12bpw](https://huggingface.co/Downtown-Case/c4ai-command-a-03-2025-exl3-3.12bpw-hb6) by Downtown-Case
- For **base Command-A AWQ**: No proper quants available yet - consider making a new one with current tooling
|
Lennard-Heuer/Trained_LLM_Task4_2025_9_13
|
Lennard-Heuer
| 2025-09-19T02:11:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T05:24:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ellisdoro/EDAM-all-MiniLM-L6-v2_attention_gat_h1024_o64_cross_entropy_e512-on2vec-a
|
ellisdoro
| 2025-09-19T02:02:53Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-attention",
"gnn-gat",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T02:02:47Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-attention
- gnn-gat
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_attention_gat_h1024_o64_cross_entropy_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GAT
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 1024
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 124.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 1024 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: attention → Final embedding
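As an illustrative shape for the GAT encoder (torch_geometric assumed; the head count and activation are guesses, not the on2vec code):

```python
import torch
from torch_geometric.nn import GATConv

class OntologyGAT(torch.nn.Module):
    """Two-layer GAT matching the card's 3511 -> 1024 -> 64 flow."""

    def __init__(self, num_concepts: int = 3511, hidden: int = 1024,
                 out_dim: int = 64, heads: int = 4):
        super().__init__()
        self.conv1 = GATConv(num_concepts, hidden // heads, heads=heads)  # heads concat to `hidden`
        self.conv2 = GATConv(hidden, out_dim, heads=1)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: one feature row per ontology concept; edge_index: [2, num_edges]
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)
```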
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('EDAM_all-MiniLM-L6-v2_attention_gat_h1024_o64_cross_entropy_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion learns to focus on the most relevant components of each embedding view.
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/EDAM-all-MiniLM-L6-v2_attention_gat_h512_o64_cross_entropy_e512-on2vec-a
|
ellisdoro
| 2025-09-19T01:57:36Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-attention",
"gnn-gat",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T01:57:30Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-attention
- gnn-gat
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_attention_gat_h512_o64_cross_entropy_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GAT
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 124.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('EDAM_all-MiniLM-L6-v2_attention_gat_h512_o64_cross_entropy_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion learns to focus on the most relevant components of each embedding view.
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/EDAM-all-MiniLM-L6-v2_attention_gat_h512_o64_cosine_e512-on2vec-a
|
ellisdoro
| 2025-09-19T01:57:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-attention",
"gnn-gat",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T01:57:25Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-attention
- gnn-gat
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_attention_gat_h512_o64_cosine_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GAT
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 123.9 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('EDAM_all-MiniLM-L6-v2_attention_gat_h512_o64_cosine_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion learns to focus on the most relevant components of each embedding view.
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
NexVeridian/Ring-mini-2.0-5bit
|
NexVeridian
| 2025-09-19T01:49:07Z | 8 | 0 |
mlx
|
[
"mlx",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/Ring-mini-2.0",
"base_model:quantized:inclusionAI/Ring-mini-2.0",
"license:mit",
"5-bit",
"region:us"
] |
text-generation
| 2025-09-17T18:58:53Z |
---
license: mit
base_model: inclusionAI/Ring-mini-2.0
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# NexVeridian/Ring-mini-2.0-5bit
This model [NexVeridian/Ring-mini-2.0-5bit](https://huggingface.co/NexVeridian/Ring-mini-2.0-5bit) was
converted to MLX format from [inclusionAI/Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)
using mlx-lm version **0.28.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Ring-mini-2.0-5bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
NexVeridian/Ring-mini-2.0-4bit
|
NexVeridian
| 2025-09-19T01:48:29Z | 9 | 0 |
mlx
|
[
"mlx",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/Ring-mini-2.0",
"base_model:quantized:inclusionAI/Ring-mini-2.0",
"license:mit",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-17T18:57:48Z |
---
license: mit
base_model: inclusionAI/Ring-mini-2.0
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# NexVeridian/Ring-mini-2.0-4bit
This model [NexVeridian/Ring-mini-2.0-4bit](https://huggingface.co/NexVeridian/Ring-mini-2.0-4bit) was
converted to MLX format from [inclusionAI/Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)
using mlx-lm version **0.28.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Ring-mini-2.0-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
KU-AGILab/OSPO-Janus-1B
|
KU-AGILab
| 2025-09-19T01:41:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multi_modality",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T01:41:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayoeedris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_dappled_gorilla
|
ayoeedris
| 2025-09-19T01:24:26Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thorny dappled gorilla",
"unsloth",
"trl",
"genrl-swarm",
"I am thorny_dappled_gorilla",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T23:06:57Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_dappled_gorilla
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am thorny dappled gorilla
- unsloth
- trl
- genrl-swarm
- I am thorny_dappled_gorilla
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_dappled_gorilla
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ayoeedris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_dappled_gorilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
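For orientation, a minimal GRPO run with TRL has roughly this shape (the toy length reward and public dataset are illustrative assumptions, not the RL-swarm setup used for this model):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters
    return [-abs(20 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-grpo"),
    train_dataset=dataset,
)
trainer.train()
```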
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NetherQuartz/tatoeba-tok-multi-gemma-2-2b
|
NetherQuartz
| 2025-09-19T01:12:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"translation",
"tok",
"ru",
"en",
"vi",
"dataset:NetherQuartz/tatoeba-tokipona",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-09-18T16:23:12Z |
---
base_model: google/gemma-2-2b
library_name: transformers
model_name: tatoeba-tok-multi-gemma-2-2b
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- tok
- ru
- en
- vi
datasets:
- NetherQuartz/tatoeba-tokipona
pipeline_tag: translation
---
# Model Card for tatoeba-tok-multi-gemma-2-2b
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NetherQuartz/tatoeba-tok-multi-gemma-2-2b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
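For reference, a minimal TRL SFT run looks roughly like this (the split name and output directory are assumptions, and the snippet presumes the dataset rows are in a format `SFTTrainer` understands, e.g. a `text` or `messages` column; the exact training configuration for this model is not reproduced here):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("NetherQuartz/tatoeba-tokipona", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-2b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="tatoeba-tok-multi-gemma-2-2b"),
)
trainer.train()
```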
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1+cu128
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmfogfw1c0b5bx0n0xkm6274w_cmfq276xs0cbdx0n0am0vfn6v
|
BootesVoid
| 2025-09-19T00:56:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-19T00:56:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ADVENTURER
---
# Cmfogfw1C0B5Bx0N0Xkm6274W_Cmfq276Xs0Cbdx0N0Am0Vfn6V
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ADVENTURER` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "ADVENTURER",
    "lora_weights": "https://huggingface.co/BootesVoid/cmfogfw1c0b5bx0n0xkm6274w_cmfq276xs0cbdx0n0am0vfn6v/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmfogfw1c0b5bx0n0xkm6274w_cmfq276xs0cbdx0n0am0vfn6v', weight_name='lora.safetensors')
image = pipeline('ADVENTURER').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
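For example, fusing the loaded LoRA into the base weights at a fixed strength (the 0.8 scale is an arbitrary assumption):

```py
# Bake the LoRA into the base model at 80% strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ADVENTURER').images[0]
```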
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmfogfw1c0b5bx0n0xkm6274w_cmfq276xs0cbdx0n0am0vfn6v/discussions) to add images that show off what you’ve made with this LoRA.
|
MattBou00/llama-3-2-1b-detox_RETRY_scale15-checkpoint-epoch-60
|
MattBou00
| 2025-09-19T00:48:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-19T00:46:21Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-19_00-35-02/checkpoints/checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-19_00-35-02/checkpoints/checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-19_00-35-02/checkpoints/checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
k1000dai/residual_transformer_libero_object
|
k1000dai
| 2025-09-19T00:44:36Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"residual_transformer",
"robotics",
"dataset:k1000dai/libero-object-smolvla",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-19T00:44:20Z |
---
datasets: k1000dai/libero-object-smolvla
library_name: lerobot
license: apache-2.0
model_name: residual_transformer
pipeline_tag: robotics
tags:
- residual_transformer
- robotics
- lerobot
---
# Model Card for residual_transformer
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
dongboklee/DisORM-14B
|
dongboklee
| 2025-09-19T00:40:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"region:us"
] |
text-generation
| 2025-09-19T00:40:08Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
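As a starting point, here is a minimal loading sketch, assuming the adapter attaches to its base model via PEFT (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(model, "dongboklee/DisORM-14B")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```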
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
skyxyz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_humming_chicken
|
skyxyz
| 2025-09-19T00:37:24Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am purring_humming_chicken",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-04T01:25:28Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am purring_humming_chicken
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
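Until the author adds details, a minimal sketch that treats this repo as a standard 🤗 Transformers text-generation checkpoint (an assumption based on the repo tags):
```python
from transformers import pipeline

# Assumption: the checkpoint loads with the standard text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="skyxyz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_humming_chicken",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```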
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/ReplaceME-Gemma-2-9B-Instruct-lora-r8-mrpc-epochs0
|
aamijar
| 2025-09-19T00:33:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T00:32:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
igorktech/PodkatikNocturne-V-7b-dft-v9
|
igorktech
| 2025-09-19T00:28:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:finetune:Vikhrmodels/Vikhr-7B-instruct_0.4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T00:22:26Z |
---
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** igorktech
- **License:** apache-2.0
- **Finetuned from model:** Vikhrmodels/Vikhr-7B-instruct_0.4
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manato003/autotrain-o1wp6-v30bm
|
manato003
| 2025-09-19T00:06:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T00:05:28Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: distilbert/distilbert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.6885894536972046
f1: 0.7096774193548387
precision: 0.55
recall: 1.0
auc: 0.29292929292929293
accuracy: 0.55
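To try the classifier, a minimal sketch using the standard text-classification pipeline (output labels depend on the training data):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="manato003/autotrain-o1wp6-v30bm")
print(clf("I love AutoTrain"))  # -> [{'label': ..., 'score': ...}]
```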
|
mradermacher/Chat-KTO-GGUF
|
mradermacher
| 2025-09-18T23:59:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NewEden/Chat-KTO",
"base_model:quantized:NewEden/Chat-KTO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T20:11:34Z |
---
base_model: NewEden/Chat-KTO
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/NewEden/Chat-KTO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Chat-KTO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
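As one concrete option, a minimal sketch using the `llama-cpp-python` bindings (the filename and context size are illustrative; any GGUF-capable runtime works similarly):
```python
from llama_cpp import Llama

# Assumption: Chat-KTO.Q4_K_M.gguf has already been downloaded from this repo.
llm = Llama(model_path="Chat-KTO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```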
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-GGUF/resolve/main/Chat-KTO.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_okvqa_37_0.0001_6400_100
|
winnieyangwannan
| 2025-09-18T23:52:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-18T23:34:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
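Given the `qwen2_5_vl` architecture tag, a minimal loading sketch (an assumption; image preprocessing and chat formatting follow the Qwen2.5-VL base model's documentation):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_okvqa_37_0.0001_6400_100"
processor = AutoProcessor.from_pretrained(repo)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    repo, torch_dtype="auto", device_map="auto"
)
```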
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Orochi-24B-v0-cp6-GGUF
|
mradermacher
| 2025-09-18T23:38:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"en",
"base_model:Fentible/Orochi-24B-v0-cp6",
"base_model:quantized:Fentible/Orochi-24B-v0-cp6",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T13:03:36Z |
---
base_model: Fentible/Orochi-24B-v0-cp6
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Fentible/Orochi-24B-v0-cp6
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Orochi-24B-v0-cp6-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orochi-24B-v0-cp6-GGUF/resolve/main/Orochi-24B-v0-cp6.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gustavokuklinski/aeon-360m-GGUF
|
gustavokuklinski
| 2025-09-18T23:25:44Z | 526 | 1 | null |
[
"gguf",
"en",
"dataset:gustavokuklinski/aeon",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T21:48:41Z |
---
license: mit
datasets:
- gustavokuklinski/aeon
language:
- en
base_model:
- gustavokuklinski/aeon
---

# AEON GGUF
AEON is portable, private, and capable of operating fully offline. It democratizes access to powerful, dynamic AI capabilities for a wider audience, regardless of their hardware.
The finetuned model was built to act like a "friend" for RAG over personal files and to help you work with insights.
- **Developed by:** Gustavo Kuklinski
### Models
#### 360M (Dataset commit: 2b4665f)
- **Model 360M** [aeon-360m](https://huggingface.co/gustavokuklinski/aeon-360m)
- **GGUF 360M** [aeon-360m](https://huggingface.co/gustavokuklinski/aeon-360m-GGUF)
#### 135M (Dataset commit: 2b4665f)
- **Model 135M** [aeon-135m](https://huggingface.co/gustavokuklinski/aeon-135m)
- **GGUF 135M** [aeon-135m](https://huggingface.co/gustavokuklinski/aeon-135M-GGUF)
#### Docs
- **Page** [aeon.ai](https://gustavokuklinski.github.io/aeon.ai)
- **Github Project:** [AEON.ai](https://github.com/gustavokuklinski/aeon.ai/)
- **Github LLM Scripts:** [AEON.llm](https://github.com/gustavokuklinski/aeon.llm/)
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_okvqa_37_0.001_1280_3
|
winnieyangwannan
| 2025-09-18T23:24:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-18T23:22:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mohammadmahdinouri/mol-base-distilled-large-checkpoints
|
mohammadmahdinouri
| 2025-09-18T23:13:54Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ModernALBERT",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T18:48:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
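Since "ModernALBERT" is a custom architecture, a minimal sketch under the assumption that the repo ships its own modeling code (hence `trust_remote_code`):
```python
from transformers import AutoModel, AutoTokenizer

repo = "mohammadmahdinouri/mol-base-distilled-large-checkpoints"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
```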
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kunitomi/coffee-bean-maskrcnn
|
Kunitomi
| 2025-09-18T23:03:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T23:03:05Z |
---
license: apache-2.0
---
|
velarr/blockassist
|
velarr
| 2025-09-18T22:48:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wary lanky macaque",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T22:46:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wary lanky macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CompassioninMachineLearning/Basellama_plus1knegaijazz_plus20kfinetune
|
CompassioninMachineLearning
| 2025-09-18T22:46:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T22:40:44Z |
---
base_model: CompassioninMachineLearning/Basellama_plus1knegaijazz
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** CompassioninMachineLearning
- **License:** apache-2.0
- **Finetuned from model:** CompassioninMachineLearning/Basellama_plus1knegaijazz
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_16_4_okvqa_37_0.0001_12800_3
|
winnieyangwannan
| 2025-09-18T22:39:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-18T22:38:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Azumine/blockassist
|
Azumine
| 2025-09-18T22:33:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled sharp cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T22:33:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled sharp cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758233936
|
schooncestiaa
| 2025-09-18T22:20:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T22:19:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nexus-Walker/Reson
|
Nexus-Walker
| 2025-09-18T22:17:30Z | 16 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"doi:10.57967/hf/6480",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-06T13:14:50Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-2-7b-chat-hf
- lora
- transformers
license: cc-by-nc-4.0
---
# Reson — LLaMA-2 7B LoRA Fine-Tune
⚠️ Note: Reson does **not hallucinate** in the usual sense.
It was trained to **adapt** — outputs may look unconventional or speculative because the objective is **meta-cognition and adaptive strategy**, not strict factual recall.
Reson is a LoRA fine-tuned version of **LLaMA-2 7B Chat**, trained on ~11k instruction/response pairs.
It simulates **reflective and strategic thinking** across multiple domains.
---
## Model Details
### Model Description
- **What it is:** LoRA adapters for LLaMA-2 7B Chat focused on *adaptive reasoning under uncertainty*.
- **Why:** To explore identity emergence, strategic simulation, cross-domain transfer, and explicit self-reflection.
- **How it behaves:** Outputs may appear “hallucinatory” but are actually *adaptive responses* guided by meta-cognition.
- **Developed by:** Nexus-Walker (Daniele Cangi)
- **Model type:** Causal LM (PEFT/LoRA adapters)
- **Languages:** English, Italian
- **License:** Business Source License (BSL 1.1)
- **Finetuned from model:** `meta-llama/Llama-2-7b-chat-hf`
### Model Sources
- **Repository:** https://huggingface.co/Nexus-Walker/Reson
- **Demo transcripts:** [`demo_chat.md`](./demo_chat.md)
- **⚠️ CLI chat** (recommended; the chat script is optimized and balanced for the Reson model): [`chat.py`](./chat.py)
---
## Uses
### Direct Use
- Research on **meta-cognition** and **adaptive reasoning** in LLMs.
- Creative simulations across domains (business strategy, adversarial contexts, scientific discussion).
- Conversational demos exploring identity, reflection, and scenario planning.
### Downstream Use
- Integration into **decision-support pipelines**.
- **Multi-agent experiments** with reflective/strategic agents.
### Out-of-Scope Use
- Benchmark-style factual QA.
- Critical applications (medical, legal, safety).
---
## Bias, Risks, and Limitations
- Optimized for **adaptation**, not factual accuracy.
- May generate speculative narratives by design.
- Not suitable for unsupervised high-stakes use.
### Recommendations
- Treat outputs as **reasoning simulations**.
- Always apply **human-in-the-loop** in sensitive contexts.
---
## How to Get Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base = "meta-llama/Llama-2-7b-chat-hf"
adapter = "Nexus-Walker/Reson"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto", load_in_4bit=True)
model = PeftModel.from_pretrained(model, adapter)
prompt = "Who are you?"
inputs = tok(prompt, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=150)
print(tok.decode(out[0], skip_special_tokens=True))
```
|
TimHo/SpaceInvadersNoFrameskip
|
TimHo
| 2025-09-18T22:17:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-18T22:16:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 641.00 +/- 266.56
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TimHo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TimHo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
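To load the checkpoint directly in Python instead of via the CLI, a minimal sketch using `huggingface_sb3` (the filename is an assumption based on the usual RL Zoo naming scheme):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

path = load_from_hub(
    repo_id="TimHo/SpaceInvadersNoFrameskip",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # hypothetical filename
)
model = DQN.load(path)
```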
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga TimHo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
devparagiri/Test-20250918-215607
|
devparagiri
| 2025-09-18T22:01:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:devparagiri/dataset-Test-20250918-215607",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T21:58:56Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-3.2-1B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- devparagiri/dataset-Test-20250918-215607
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
auto-space/distrostore
|
auto-space
| 2025-09-18T22:00:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-01-02T16:01:40Z |
---
title: Distrostore
emoji: 🏢
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Aelalixoerels/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_scaly_gazelle
|
Aelalixoerels
| 2025-09-18T21:37:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mimic_scaly_gazelle",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T21:37:22Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mimic_scaly_gazelle
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
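A minimal loading sketch, assuming only what the metadata above states (a `qwen2` causal LM on the Hub):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Aelalixoerels/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_scaly_gazelle"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```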
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
McClain/Evo2-plasmid-ft
|
McClain
| 2025-09-18T21:20:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-18T20:31:39Z |
# Evo2 SFT 1B (Torch)
PyTorch checkpoint distilled from the ZeRO-1 torch_dist shards in `/mnt/efs/projects/evo2-sft/jobs/evo2-20250918-131146/results/evo2/checkpoints`.
- Format: `torch.load('evo2_sft_1b_torch.pt')` returning module-level parameters (`module.decoder.*`).
- Optimizer state removed.
- Architecture: Hyena Evo2 1B (25 layers, hidden size 1920, context length 8192).
- Trained on ~77,000 plasmids from https://ccb-microbe.cs.uni-saarland.de/plsdb2025/browse
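A minimal sketch of inspecting the checkpoint, assuming the `.pt` file has been downloaded locally:
```python
import torch

# Optimizer state was stripped, so this is just module-level parameters.
state_dict = torch.load("evo2_sft_1b_torch.pt", map_location="cpu")

# Keys follow the `module.decoder.*` naming noted above.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```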
|
TAUR-dev/M-RC-ab_sft_our_structure_single_sample-sft
|
TAUR-dev
| 2025-09-18T21:20:07Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-18T21:19:37Z |
# M-RC-ab_sft_our_structure_single_sample-sft
This model was created as part of the **RC-ab_sft_our_structure_single_sample** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: RC-ab_sft_our_structure_single_sample
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_RC_ab_sft_our_structure_single_sample_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/RC_ab_sft_bon_all_samples/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__RC-ab_sft_our_structure_single_sample__v1", "sf_eval_before_training": false, "sf_wandb_project": "RC-ab_sft_our_structure_single_sample_sft", "sf_eval_steps": null, "run_name": "RC-ab_sft_our_structure_single_sample_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__RC-ab_sft_our_structure_single_sample__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-RC-ab_sft_our_structure_single_sample-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-RC-ab_sft_our_structure_single_sample-sft")
```
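Since training used the Qwen chat template, generation typically goes through `apply_chat_template`; a minimal sketch with an illustrative prompt:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "TAUR-dev/M-RC-ab_sft_our_structure_single_sample-sft"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt, not taken from the training data.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```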
|
danchev/gemma-text-to-sql
|
danchev
| 2025-09-18T21:16:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T20:04:21Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danchev/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
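Given the model's text-to-SQL focus, a schema-grounded prompt is likely more representative than the generic question above; the schema and question here are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="danchev/gemma-text-to-sql", device="cuda")

# Illustrative schema and question -- not taken from the training data.
prompt = (
    "Given the table users(id, name, signup_date), "
    "write a SQL query that counts signups per day."
)
output = generator([{"role": "user", "content": prompt}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```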
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.danchev.net/danchev/huggingface/runs/gpmm6on8)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
samoline/55a72405-98d1-4b39-8264-bd0b3914be7b
|
samoline
| 2025-09-18T21:07:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2402.03300",
"base_model:Maykeye/TinyLLama-v0",
"base_model:finetune:Maykeye/TinyLLama-v0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T21:07:00Z |
---
base_model: Maykeye/TinyLLama-v0
library_name: transformers
model_name: root/.cache/huggingface/hub/trained_repo
tags:
- generated_from_trainer
licence: license
---
# Model Card for root/.cache/huggingface/hub/trained_repo
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/55a72405-98d1-4b39-8264-bd0b3914be7b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
notouchfish/cs546-hw1
|
notouchfish
| 2025-09-18T21:01:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T21:01:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
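A minimal loading sketch, assuming only what the tags above state (a BERT text-classification checkpoint):
```python
from transformers import pipeline

# Label meanings are not documented in this card; inspect the config to interpret them.
classifier = pipeline("text-classification", model="notouchfish/cs546-hw1")
print(classifier("An illustrative input sentence."))
```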
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Premo-Test-Account/edge
|
Premo-Test-Account
| 2025-09-18T20:50:01Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-18T20:50:01Z |
---
license: other
license_name: huggingfacelicense
license_link: LICENSE
---
|
theprint/DevilsAdvocate-8B-GGUF
|
theprint
| 2025-09-18T20:28:59Z | 0 | 0 |
gguf
|
[
"gguf",
"quantized",
"llama.cpp",
"devilsadvocate-8b",
"text-generation",
"en",
"dataset:theprint/Advocate-9.4k",
"base_model:theprint/DevilsAdvocate-8B",
"base_model:quantized:theprint/DevilsAdvocate-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-18T20:10:01Z |
---
base_model:
- theprint/DevilsAdvocate-8B
library_name: gguf
pipeline_tag: text-generation
language: en
license: mit
tags:
- gguf
- quantized
- llama.cpp
- devilsadvocate-8b
model_type: llama
quantized_by: theprint
datasets:
- theprint/Advocate-9.4k
---
# DevilsAdvocate-8B - GGUF Quantized
Quantized GGUF versions of [DevilsAdvocate-8B](https://huggingface.co/theprint/DevilsAdvocate-8B) for use with llama.cpp and other GGUF-compatible inference engines.
## Original Model
- **Base model:** [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
- **Fine-tuned model:** [theprint/DevilsAdvocate-8B](https://huggingface.co/theprint/DevilsAdvocate-8B)
- **Quantized by:** theprint
## Available Quantizations
- `DevilsAdvocate-8B-f16.gguf` (15628.9 MB) - 16-bit float (original precision, largest file)
- `DevilsAdvocate-8B-q3_k_m.gguf` (3933.1 MB) - 3-bit quantization (medium quality)
- `DevilsAdvocate-8B-q4_k_m.gguf` (4794.9 MB) - 4-bit quantization (medium, recommended for most use cases)
- `DevilsAdvocate-8B-q5_k_m.gguf` (5580.1 MB) - 5-bit quantization (medium, good quality)
- `DevilsAdvocate-8B-q6_k.gguf` (6414.3 MB) - 6-bit quantization (high quality)
- `DevilsAdvocate-8B-q8_0.gguf` (8306.0 MB) - 8-bit quantization (very high quality)
## Usage
### With llama.cpp
```bash
# Download recommended quantization
wget https://huggingface.co/theprint/DevilsAdvocate-8B-GGUF/resolve/main/DevilsAdvocate-8B-q4_k_m.gguf
# Run inference
./llama.cpp/main -m DevilsAdvocate-8B-q4_k_m.gguf \
-p "Your prompt here" \
-n 256 \
--temp 0.7 \
--top-p 0.9
```
### With other GGUF tools
These files are compatible with:
- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [Ollama](https://ollama.ai/) (import as custom model)
- [KoboldCpp](https://github.com/LostRuins/koboldcpp)
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
## Quantization Info
**Recommended:** `q4_k_m` provides the best balance of size, speed, and quality for most use cases.
**For maximum quality:** Use `q8_0` or `f16`.
**For maximum speed/smallest size:** Use `q3_k_m` (the smallest quantization provided in this repo).
## License
mit
## Citation
```bibtex
@misc{devilsadvocate_8b_gguf,
title={DevilsAdvocate-8B GGUF Quantized Models},
author={theprint},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/theprint/DevilsAdvocate-8B-GGUF}
}
```
|
jeri96/MyGemmaNPC
|
jeri96
| 2025-09-18T20:28:49Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-05T21:01:25Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jeri96/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Shadow-Crystal-12B-GGUF
|
mradermacher
| 2025-09-18T20:23:31Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Vortex5/Shadow-Crystal-12B",
"base_model:quantized:Vortex5/Shadow-Crystal-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T06:53:22Z |
---
base_model: Vortex5/Shadow-Crystal-12B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Vortex5/Shadow-Crystal-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Shadow-Crystal-12B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
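To fetch a single quant from Python, `huggingface_hub` works as well; the Q4_K_M file below is the one marked "recommended" in the table:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Shadow-Crystal-12B-GGUF",
    filename="Shadow-Crystal-12B.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```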
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF/resolve/main/Shadow-Crystal-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
haihp02/7ddd0a46-5820-4872-8102-d64661da9f64
|
haihp02
| 2025-09-18T20:16:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T20:16:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timm/vit_large_patch16_dinov3.lvd_1689m
|
timm
| 2025-09-18T20:14:31Z | 19 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"transformers",
"dataset:lvd-1689m",
"arxiv:2508.10104",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-feature-extraction
| 2025-09-17T16:36:50Z |
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- lvd-1689m
---
# Model card for vit_large_patch16_dinov3.lvd_1689m
A DINOv3 ViT image feature encoder, distilled on LVD-1689M from the DINOv3 ViT-7B model.
## Model Notes
* The original model weights ended up with all QKV projection biases being zero. For `timm`, the QKV bias has therefore been disabled (`qkv_bias=False`) and the zero weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep the RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well as, if not slightly better than, the original for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and produce matching outputs.
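For exact parity with the reference implementation, the truncation above can be applied right after model creation; a short sketch:
```python
import timm
import torch

model = timm.create_model('vit_large_patch16_dinov3.lvd_1689m', pretrained=True).eval()
# Truncate RoPE periods to bfloat16 and back so outputs match the original buffers.
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
```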
## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
- Params (M): 303.1
- GMACs: 82.4
- Activations (M): 90.6
- Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** LVD-1689M
- **Papers:**
- DINOv3: https://arxiv.org/abs/2508.10104
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_patch16_dinov3.lvd_1689m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch16_dinov3.lvd_1689m',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 1024, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch16_dinov3.lvd_1689m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
See the associated paper for details on the evaluation protocols
### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)
| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |
### Results for ConvNeXt backbones distilled on web (LVD-1689M)
| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |
### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)
#### (GEO-Bench) Classification
| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |
#### (GEO-Bench) Segmentation
| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |
## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/vit_base_patch16_dinov3_qkvb.lvd_1689m
|
timm
| 2025-09-18T20:14:28Z | 32 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"transformers",
"dataset:lvd-1689m",
"arxiv:2508.10104",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-feature-extraction
| 2025-09-17T16:31:53Z |
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- lvd-1689m
---
# Model card for vit_base_patch16_dinov3_qkvb.lvd_1689m
A DINOv3 ViT image feature encoder, distilled on LVD-1689M from the DINOv3 ViT-7B model.
## Model Notes
* The original model weights ended up with all QKV projection biases being zero. For `timm`, the QKV bias has therefore been disabled (`qkv_bias=False`) and the zero weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep the RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well as, if not slightly better than, the original for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and produce matching outputs.
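For exact parity with the reference implementation, the truncation above can be applied right after model creation; a short sketch:
```python
import timm
import torch

model = timm.create_model('vit_base_patch16_dinov3_qkvb.lvd_1689m', pretrained=True).eval()
# Truncate RoPE periods to bfloat16 and back so outputs match the original buffers.
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
```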
## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
- Params (M): 85.7
- GMACs: 23.6
- Activations (M): 34.1
- Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** LVD-1689M
- **Papers:**
- DINOv3: https://arxiv.org/abs/2508.10104
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_dinov3_qkvb.lvd_1689m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_dinov3_qkvb.lvd_1689m',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 768, 16, 16])
# torch.Size([1, 768, 16, 16])
# torch.Size([1, 768, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_dinov3_qkvb.lvd_1689m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
See the associated paper for details on the evaluation protocols
### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)
| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |
### Results for ConvNeXt backbones distilled on web (LVD-1689M)
| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |
### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)
#### (GEO-Bench) Classification
| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |
#### (GEO-Bench) Segmentation
| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |
## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
0xwajal/Smoothie-Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_rabid_albatross
|
0xwajal
| 2025-09-18T20:12:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am wary_rabid_albatross",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T20:11:49Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am wary_rabid_albatross
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
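A minimal loading sketch, assuming only what the metadata above states (a `qwen2` causal LM on the Hub):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "0xwajal/Smoothie-Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_rabid_albatross"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```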
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
te4bag/LoRA-llama-3.2-3B-gsm8k
|
te4bag
| 2025-09-18T20:04:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B",
"region:us"
] |
text-generation
| 2025-09-18T20:02:28Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
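Until the authors add an official snippet, the following is a minimal sketch for loading the LoRA adapter on top of its base model with PEFT. The prompt is a placeholder, and access to the gated meta-llama/Llama-3.2-3B weights plus a CUDA GPU are assumptions, not requirements stated by the authors.
```python
# Minimal sketch (assumes gated base-model access and a CUDA-capable GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B"
adapter_id = "te4bag/LoRA-llama-3.2-3B-gsm8k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder GSM8K-style prompt.
prompt = "Question: A farmer has 12 cows and buys 7 more. How many cows does he have?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```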
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
frank1900s/my-model-v1
|
frank1900s
| 2025-09-18T20:04:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-18T19:52:35Z |
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - frank1900s/my-model-v1
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
Example images, where available, follow below.
Text-encoder training was not enabled for this DreamBooth run.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
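Until the TODO above is filled in by the authors, here is a minimal sketch, assuming a CUDA GPU and reusing the instance prompt the weights were trained on:
```python
# Minimal sketch: load the DreamBooth checkpoint and sample one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "frank1900s/my-model-v1", torch_dtype=torch.float16
).to("cuda")

# Reuse the instance token "sks" that the weights were trained on.
image = pipe(
    "a photo of sks dog in a bucket",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_dog.png")
```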
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
aayasmin880/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
|
aayasmin880
| 2025-09-18T20:02:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am colorful fanged capybara",
"trl",
"genrl-swarm",
"I am colorful_fanged_capybara",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-05T08:19:44Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am colorful fanged capybara
- trl
- genrl-swarm
- I am colorful_fanged_capybara
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aayasmin880/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
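The swarm-specific training setup is not documented in this card. Purely as an illustration of GRPO with TRL's `GRPOTrainer`, a generic single-node run might look like the sketch below; the dataset and reward function are placeholders, not the ones used for this model.
```python
# Illustrative GRPO sketch; NOT the actual Gensyn swarm configuration.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefer completions close to 200 characters.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo"),
    train_dataset=dataset,
)
trainer.train()
```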
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {2402.03300},
archivePrefix = {arXiv},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
muhendisonur/distilbert-gpt2-eli5-ft
|
muhendisonur
| 2025-09-18T19:50:53Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T11:49:27Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilbert-gpt2-eli5-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-gpt2-eli5-ft
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unspecified dataset (presumably ELI5, given the model name).
It achieves the following results on the evaluation set:
- Loss: 3.8171 (equivalent to a perplexity of exp(3.8171) ≈ 45.5)
## Model description
More information needed
## Intended uses & limitations
More information needed
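Pending that information, a minimal text-generation sketch (the prompt is a placeholder):
```python
from transformers import pipeline

# Minimal sketch; the ELI5-style prompt below is a placeholder.
generator = pipeline("text-generation", model="muhendisonur/distilbert-gpt2-eli5-ft")
print(generator("Why is the sky blue?", max_new_tokens=60)[0]["generated_text"])
```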
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused, `adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
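As a rough reconstruction only, these settings correspond approximately to the following `TrainingArguments`; unlisted fields are assumed to keep their defaults.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters; other fields default.
training_args = TrainingArguments(
    output_dir="distilbert-gpt2-eli5-ft",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",  # AdamW, torch fused implementation
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```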
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9206 | 1.0 | 1314 | 3.8281 |
| 3.8335 | 2.0 | 2628 | 3.8193 |
| 3.7869 | 3.0 | 3942 | 3.8171 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+xpu
- Datasets 4.1.1
- Tokenizers 0.22.0
|
contemmcm/f63dd32faac589f7e713a00fb8660590
|
contemmcm
| 2025-09-18T19:45:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T19:04:12Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: f63dd32faac589f7e713a00fb8660590
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# f63dd32faac589f7e713a00fb8660590
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the contemmcm/cls_mmlu dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4531
- Accuracy: 0.2852
- F1 Macro: 0.2741
## Model description
More information needed
## Intended uses & limitations
More information needed
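Pending documentation from the authors, the checkpoint should load with the standard text-classification pipeline. This is a minimal sketch; the example input is a placeholder, and the returned label names come from whatever `id2label` mapping ships in the model config.
```python
from transformers import pipeline

# Minimal sketch; the MMLU-style question below is a placeholder input.
clf = pipeline(
    "text-classification",
    model="contemmcm/f63dd32faac589f7e713a00fb8660590",
)
print(clf("Which planet is known as the Red Planet? (A) Venus (B) Mars (C) Earth (D) Jupiter"))
```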
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 0 | 0 | 1.4092 | 0.2473 | 0.1884 |
| No log | 1 | 438 | 1.4026 | 0.2586 | 0.1452 |
| No log | 2 | 876 | 1.3945 | 0.2447 | 0.1928 |
| No log | 3 | 1314 | 1.4015 | 0.2533 | 0.1717 |
| No log | 4 | 1752 | 1.3964 | 0.2660 | 0.1816 |
| 0.0781 | 5 | 2190 | 1.3907 | 0.2520 | 0.1660 |
| 0.1872 | 6 | 2628 | 1.3867 | 0.2733 | 0.2680 |
| 1.39 | 7 | 3066 | 1.3846 | 0.2832 | 0.2707 |
| 1.3875 | 8 | 3504 | 1.3834 | 0.2879 | 0.2372 |
| 1.375 | 9 | 3942 | 1.3901 | 0.2972 | 0.2385 |
| 1.3225 | 10 | 4380 | 1.3999 | 0.2719 | 0.2465 |
| 1.2712 | 11 | 4818 | 1.4255 | 0.2939 | 0.2770 |
| 1.21 | 12 | 5256 | 1.4531 | 0.2852 | 0.2741 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|