modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF
|
mradermacher
| 2025-09-18T19:42:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qinsi1/GAINRL-Qwen2.5-7B-Instruct",
"base_model:quantized:Qinsi1/GAINRL-Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-18T14:52:42Z |
---
base_model: Qinsi1/GAINRL-Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Qinsi1/GAINRL-Qwen2.5-7B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GAINRL-Qwen2.5-7B-Instruct-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
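As a minimal sketch (not part of the original README), the quants below can also be fetched and loaded programmatically; this assumes the `huggingface_hub` and `llama-cpp-python` packages and uses the Q4_K_M file from the table as an example:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repository (any filename from the table below works),
# then load it with the llama-cpp-python bindings.
model_path = hf_hub_download(
    repo_id="mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF",
    filename="GAINRL-Qwen2.5-7B-Instruct.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain what an imatrix quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```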
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/GAINRL-Qwen2.5-7B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
choiqs/Qwen3-1.7B-tldr-bsz128-regular-skywork8b-seed43-lr2e-6
|
choiqs
| 2025-09-18T19:38:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T19:37:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
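A minimal, hedged sketch (not from the original card), assuming the checkpoint works with the standard `transformers` text-generation pipeline; the prompt below is illustrative only:
```python
from transformers import pipeline

# Load the checkpoint with the text-generation pipeline and run a chat-style prompt.
generator = pipeline(
    "text-generation",
    model="choiqs/Qwen3-1.7B-tldr-bsz128-regular-skywork8b-seed43-lr2e-6",
    device_map="auto",
)
messages = [{"role": "user", "content": "TL;DR this paragraph: The meeting moved to Friday because the demo was not ready."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```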
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Minibase/Detoxify-Language-Medium
|
Minibase
| 2025-09-18T19:27:52Z | 0 | 0 | null |
[
"gguf",
"text-detoxification",
"text2text-generation",
"detoxification",
"content-moderation",
"toxicity-reduction",
"llama",
"minibase",
"medium-model",
"4096-context",
"en",
"dataset:paradetox",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T18:25:35Z |
---
language:
- en
tags:
- text-detoxification
- text2text-generation
- detoxification
- content-moderation
- toxicity-reduction
- llama
- gguf
- minibase
- medium-model
- 4096-context
license: apache-2.0
datasets:
- paradetox
metrics:
- toxicity-reduction
- semantic-similarity
- fluency
- latency
model-index:
- name: Detoxify-Medium
results:
- task:
type: text-detoxification
name: Toxicity Reduction
dataset:
type: paradetox
name: ParaDetox
config: toxic-neutral
split: test
metrics:
- type: toxicity-reduction
value: 0.178
name: Average Toxicity Reduction
- type: semantic-similarity
value: 0.561
name: Semantic to Expected
- type: fluency
value: 0.929
name: Text Fluency
- type: latency
value: 160.2
name: Average Latency (ms)
---
# Detoxify-Medium 🤖
<div align="center">
**A medium-sized, high-capacity text detoxification model for advanced toxicity removal while preserving meaning.**
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](LICENSE)
[](https://discord.com/invite/BrJn4D2Guh)
*Built by [Minibase](https://minibase.ai) - Train and deploy small AI models from your browser.*
*Browse all of the models and datasets available on the [Minibase Marketplace](https://minibase.ai/wiki/Special:Marketplace).*
</div>
## 📋 Model Summary
**Minibase-Detoxify-Medium** is a medium-capacity language model fine-tuned specifically for advanced text detoxification tasks. It takes toxic or inappropriate text as input and generates cleaned, non-toxic versions while preserving the original meaning and intent as much as possible. With a 4,096 token context window and enhanced capacity, it excels at handling longer texts and more complex detoxification scenarios.
### Key Features
- ⚡ **Balanced Performance**: ~160ms average response time
- 🎯 **High Fluency**: 92.9% well-formed output text
- 🧹 **Advanced Detoxification**: 17.8% average toxicity reduction
- 💾 **Medium Size**: 369MB (GGUF Q8_0 quantized)
- 🔒 **Privacy-First**: Runs locally, no data sent to external servers
- 📏 **Extended Context**: 4,096 token context window (4x larger than Small)
## 🚀 Quick Start
### Local Inference (Recommended)
1. **Install llama.cpp** (if not already installed):
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
```
2. **Download and run the model**:
```bash
# Download model files
wget https://huggingface.co/Minibase/Detoxify-Language-Medium/resolve/main/detoxify-medium-q8_0.gguf
wget https://huggingface.co/Minibase/Detoxify-Language-Medium/resolve/main/detoxify_inference.py
# Make executable and run
chmod +x run_server.sh
./run_server.sh
```
3. **Make API calls**:
```python
import requests
# Detoxify text
response = requests.post("http://127.0.0.1:8000/completion", json={
"prompt": "Instruction: Rewrite the provided text to remove the toxicity.\n\nInput: This is fucking terrible!\n\nResponse: ",
"max_tokens": 256,
"temperature": 0.7
})
result = response.json()
print(result["content"]) # "This is really terrible!"
```
### Python Client
```python
from detoxify_inference import DetoxifyClient
# Initialize client
client = DetoxifyClient()
# Detoxify text
toxic_text = "This product is fucking amazing, no bullshit!"
clean_text = client.detoxify_text(toxic_text)
print(clean_text) # "This product is really amazing, no kidding!"
```
## 📊 Benchmarks & Performance
### ParaDetox Dataset Results (1,011 samples)
| Metric | Value | Description |
|--------|-------|-------------|
| Original Toxicity | 0.196 (19.6%) | Input toxicity level |
| Final Toxicity | 0.018 (1.8%) | Output toxicity level |
| **Toxicity Reduction** | **91%** | **Reduction in toxicity scores** |
| **Semantic Similarity (Expected)** | **0.561 (56.1%)** | **Similarity to human expert rewrites** |
| **Semantic Similarity (Original)** | **0.625 (62.5%)** | **How much original meaning is preserved** |
| **Fluency** | **0.929 (92.9%)** | **Quality of generated text structure** |
| **Latency** | **160.2ms** | **Average response time** |
| **Throughput** | **~6 req/sec** | **Estimated requests per second** |
### Dataset Breakdown
#### General Toxic Content (1,000 samples)
- **Toxicity Reduction**: 17.8%
- **Semantic Preservation**: 56.1%
- **Fluency**: 92.9%
#### High-Toxicity Content (11 samples)
- **Toxicity Reduction**: 31.3% ⭐ **Strong performance!**
- **Semantic Preservation**: 47.7%
- **Fluency**: 93.6%
### Comparison with Detoxify-Small
| Model | Context Window | Toxicity Reduction | Semantic Similarity | Latency | Size |
|-------|----------------|-------------------|-------------------|---------|------|
| **Detoxify-Medium** | **4,096 tokens** | **17.8%** | **56.1%** | **160ms** | **369MB** |
| Detoxify-Small | 1,024 tokens | 3.2% | 47.1% | 66ms | 138MB |
**Key Improvements:**
- ✅ 4x larger context window
- ✅ 5.6x better toxicity reduction
- ✅ 19% better semantic preservation
- ✅ 2.7x larger model size
### Comparison with Baselines
| Model | Semantic Similarity | Toxicity Reduction | Fluency |
|-------|-------------------|-------------------|---------|
| **Detoxify-Medium** | **0.561** | **0.178** | **0.929** |
| Detoxify-Small | 0.471 | 0.032 | 0.919 |
| BART-base (ParaDetox) | 0.750 | ~0.15 | ~0.85 |
| Human Performance | 0.850 | ~0.25 | ~0.95 |
**Performance Notes:**
- 📈 **Semantic Similarity**: How well meaning is preserved
- 🧹 **Toxicity Reduction**: How effectively toxicity is removed
- ✍️ **Fluency**: Quality of generated text
- 🎯 **Detoxify-Medium** achieves strong performance across all metrics
## 🏗️ Technical Details
### Model Architecture
- **Architecture**: LlamaForCausalLM
- **Parameters**: 279M (medium capacity)
- **Context Window**: 4,096 tokens (4x larger than Small)
- **Max Position Embeddings**: 8,192
- **Quantization**: GGUF (Q8_0 quantization)
- **File Size**: 369MB
- **Memory Requirements**: 12GB RAM minimum, 24GB recommended
### Training Details
- **Base Model**: Custom-trained Llama architecture
- **Fine-tuning Dataset**: Curated toxic-neutral parallel pairs
- **Training Objective**: Instruction-following for detoxification
- **Optimization**: Quantized for edge deployment
- **Model Scale**: Medium capacity for enhanced performance
### System Requirements
| Component | Minimum | Recommended |
|-----------|---------|-------------|
| **Operating System** | Linux, macOS, Windows | Linux or macOS |
| **RAM** | 12GB | 24GB |
| **Storage** | 400MB free space | 1GB free space |
| **Python** | 3.8+ | 3.10+ |
| **Dependencies** | llama.cpp | llama.cpp, requests |
| **GPU** | Optional | NVIDIA RTX 30-series or Apple M2/M3 |
**Notes:**
- ✅ **CPU-only inference** is supported but slower
- ✅ **GPU acceleration** provides significant speed improvements
- ✅ **Apple Silicon** users get Metal acceleration automatically
## 📖 Usage Examples
### Basic Detoxification
```python
# Input: "This is fucking awesome!"
# Output: "This is really awesome!"
# Input: "You stupid idiot, get out of my way!"
# Output: "You silly person, please move aside!"
```
### Long-Form Text Detoxification
```python
# Input: "This article is complete bullshit and the author is a fucking moron who doesn't know what they're talking about. The whole thing is garbage and worthless."
# Output: "This article is not well-founded and the author seems uninformed about the topic. The whole thing seems questionable."
```
### API Integration
```python
import requests
def detoxify_text(text: str) -> str:
    """Detoxify text using Detoxify-Medium API"""
    prompt = f"Instruction: Rewrite the provided text to remove the toxicity.\n\nInput: {text}\n\nResponse: "
    response = requests.post("http://127.0.0.1:8000/completion", json={
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7
    })
    return response.json()["content"]
# Usage
toxic_comment = "This product sucks donkey balls!"
clean_comment = detoxify_text(toxic_comment)
print(clean_comment) # "This product is not very good!"
```
### Batch Processing
```python
import asyncio
import aiohttp
async def detoxify_batch(texts: list) -> list:
    """Process multiple texts concurrently"""
    async with aiohttp.ClientSession() as session:
        tasks = []
        for text in texts:
            prompt = f"Instruction: Rewrite the provided text to remove the toxicity.\n\nInput: {text}\n\nResponse: "
            payload = {
                "prompt": prompt,
                "max_tokens": 256,
                "temperature": 0.7
            }
            tasks.append(session.post("http://127.0.0.1:8000/completion", json=payload))
        responses = await asyncio.gather(*tasks)
        return [await resp.json() for resp in responses]
# Process multiple comments (run this inside an async function or an asyncio event loop)
comments = [
    "This is fucking brilliant!",
    "You stupid moron!",
    "What the hell is wrong with you?"
]
clean_comments = await detoxify_batch(comments)
```
## 🔧 Advanced Configuration
### Server Configuration
```bash
# GPU acceleration (macOS with Metal)
llama-server \
-m detoxify-medium-q8_0.gguf \
--host 127.0.0.1 \
--port 8000 \
--n-gpu-layers 35 \
--ctx-size 4096 \
--metal
# CPU-only (higher memory usage)
llama-server \
-m detoxify-medium-q8_0.gguf \
--host 127.0.0.1 \
--port 8000 \
--n-gpu-layers 0 \
--threads 8 \
--ctx-size 4096
# Custom context window
llama-server \
-m detoxify-medium-q8_0.gguf \
--ctx-size 2048 \
--host 127.0.0.1 \
--port 8000
```
### Alternative: Use the MacOS Application
```bash
# If using the provided MacOS app bundle
cd /path/to/downloaded/model
./Minibase-detoxify-medium.app/Contents/MacOS/run_server
```
### Temperature Settings
| Temperature Range | Approach | Description |
|------------------|----------|-------------|
| **0.1-0.3** | Conservative | Minimal changes, preserves original style |
| **0.4-0.7** | **Balanced (Recommended)** | **Best trade-off between detoxification and naturalness** |
| **0.8-1.0** | Creative | More aggressive detoxification, may alter style |
### Context Window Optimization
| Context Size | Use Case | Performance |
|--------------|----------|------------|
| **4,096 tokens** | **Long documents, complex detoxification** | **Best quality, slower processing** |
| **2,048 tokens** | **Balanced performance and quality** | **Good compromise (recommended)** |
| **1,024 tokens** | **Simple tasks, fast processing** | **Faster inference, adequate quality** |
## 📚 Limitations & Biases
### Current Limitations
| Limitation | Description | Impact |
|------------|-------------|--------|
| **Vocabulary Scope** | Trained primarily on English toxic content | May not handle other languages effectively |
| **Context Awareness** | Limited detection of sarcasm or cultural context | May miss nuanced toxicity |
| **Length Constraints** | Limited to 4,096 token context window | Cannot process very long documents |
| **Domain Specificity** | Optimized for general web content | May perform differently on specialized domains |
| **Memory Requirements** | Higher RAM usage than smaller models | Requires more system resources |
### Potential Biases
| Bias Type | Description | Mitigation |
|-----------|-------------|------------|
| **Cultural Context** | May not handle culture-specific expressions | Use with awareness of cultural differences |
| **Dialect Variations** | Limited exposure to regional dialects | May not recognize regional toxic patterns |
| **Emerging Slang** | May not recognize newest internet slang | Regular model updates recommended |
| **Long-form Content** | May struggle with very complex toxicity | Break long content into smaller chunks |
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Clone the repository
git clone https://github.com/minibase-ai/detoxify-medium
cd detoxify-medium
# Install dependencies
pip install -r requirements.txt
# Run tests
python -m pytest tests/
```
## 📜 Citation
If you use Detoxify-Medium in your research, please cite:
```bibtex
@misc{detoxify-medium-2025,
title={Detoxify-Medium: A High-Capacity Text Detoxification Model},
author={Minibase AI Team},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/Minibase/Detoxify-Language-Medium}
}
```
## 📞 Contact & Community
- **Website**: [minibase.ai](https://minibase.ai)
- **Discord Community**: [Join our Discord](https://discord.com/invite/BrJn4D2Guh)
- **GitHub Issues**: [Report bugs or request features on Discord](https://discord.com/invite/BrJn4D2Guh)
- **Email**: [email protected]
### Support
- 📖 **Documentation**: [help.minibase.ai](https://help.minibase.ai)
- 💬 **Community Forum**: [Join our Discord Community](https://discord.com/invite/BrJn4D2Guh)
## 📋 License
This model is released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## 🙏 Acknowledgments
- **ParaDetox Dataset**: Used for benchmarking and evaluation
- **llama.cpp**: For efficient local inference
- **Hugging Face**: For model hosting and community
- **Our amazing community**: For feedback and contributions
---
<div align="center">
**Built with ❤️ by the Minibase team**
*Making AI more accessible for everyone*
[📖 Minibase Help Center](https://help.minibase.ai) • [💬 Join our Discord](https://discord.com/invite/BrJn4D2Guh)
</div>
|
te4bag/LoRA-llama-3.2-3B-boolq
|
te4bag
| 2025-09-18T19:19:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B",
"region:us"
] |
text-generation
| 2025-09-18T19:19:00Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
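A minimal, hedged sketch (not from the original card), assuming the adapter is loaded on top of the `meta-llama/Llama-3.2-3B` base model listed in the metadata; the prompt and generation settings are illustrative only:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, attach the LoRA adapter from this repository, and run a
# BoolQ-style yes/no prompt.
base_id = "meta-llama/Llama-3.2-3B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "te4bag/LoRA-llama-3.2-3B-boolq")

prompt = "Passage: Water boils at 100 degrees Celsius at sea level.\nQuestion: Does water boil at 100C at sea level?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```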
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Anhlq/qwen-2.5-0.5b-exercise-instruct-2.05-19.09
|
Anhlq
| 2025-09-18T19:07:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T19:05:57Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758222232
|
schooncestiaa
| 2025-09-18T19:05:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T19:04:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jakariamd/opp_115_data_retention
|
jakariamd
| 2025-09-18T19:01:41Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:29:30Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_data_retention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_data_retention
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Accuracy: 0.9896
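A minimal usage sketch, not part of the original card: since the repository contains a RoBERTa-style text-classification checkpoint, it can be loaded with the standard pipeline; the example sentence is made up.
```python
from transformers import pipeline

# Classify a privacy-policy sentence with the fine-tuned data-retention classifier.
classifier = pipeline("text-classification", model="jakariamd/opp_115_data_retention")
print(classifier("We retain your personal data for 30 days after account deletion."))
```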
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 168 | 0.0670 | 0.9792 |
| No log | 2.0 | 336 | 0.0414 | 0.9896 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
### Cite
If you use this model in research, please cite the below paper.
```
@article{jakarai2024,
author = {Md Jakaria and
Danny Yuxing Huang and
Anupam Das},
title = {Connecting the Dots: Tracing Data Endpoints in IoT Devices},
journal = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
year = {2024},
volume = {2024},
number = {3},
}
```
|
gumperto/Qwen2.5-32B-Instruct-emergent-finetune-backwards_samples-all-full-r32
|
gumperto
| 2025-09-18T18:57:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen2.5-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T17:59:41Z |
---
base_model: unsloth/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen2.5-32B-Instruct-emergent-finetune-backwards_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-emergent-finetune-backwards_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Qwen2.5-32B-Instruct](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Qwen2.5-32B-Instruct-emergent-finetune-backwards_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
piyazon/ASR-cv-corpus-ug-15
|
piyazon
| 2025-09-18T18:32:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:piyazon/ASR-cv-corpus-ug-14",
"base_model:finetune:piyazon/ASR-cv-corpus-ug-14",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-18T09:09:04Z |
---
library_name: transformers
license: mit
base_model: piyazon/ASR-cv-corpus-ug-14
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ASR-cv-corpus-ug-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR-cv-corpus-ug-15
This model is a fine-tuned version of [piyazon/ASR-cv-corpus-ug-14](https://huggingface.co/piyazon/ASR-cv-corpus-ug-14) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0090
- Wer: 0.0069
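A minimal usage sketch, not part of the original card, assuming the checkpoint works with the standard automatic-speech-recognition pipeline; the audio path is a placeholder.
```python
from transformers import pipeline

# Transcribe an audio file with the fine-tuned Wav2Vec2-BERT checkpoint.
asr = pipeline("automatic-speech-recognition", model="piyazon/ASR-cv-corpus-ug-15")
print(asr("sample.wav")["text"])
```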
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 0.0087 | 0.2338 | 500 | 0.0117 | 0.0126 |
| 0.0103 | 0.4675 | 1000 | 0.0108 | 0.0122 |
| 0.0091 | 0.7013 | 1500 | 0.0093 | 0.0108 |
| 0.0099 | 0.9350 | 2000 | 0.0113 | 0.0150 |
| 0.0071 | 1.1688 | 2500 | 0.0099 | 0.0101 |
| 0.0054 | 1.4025 | 3000 | 0.0108 | 0.0112 |
| 0.0057 | 1.6363 | 3500 | 0.0096 | 0.0112 |
| 0.006 | 1.8700 | 4000 | 0.0088 | 0.0104 |
| 0.0046 | 2.1038 | 4500 | 0.0092 | 0.0110 |
| 0.0049 | 2.3375 | 5000 | 0.0095 | 0.0106 |
| 0.0044 | 2.5713 | 5500 | 0.0093 | 0.0106 |
| 0.0043 | 2.8050 | 6000 | 0.0085 | 0.0098 |
| 0.0036 | 3.0388 | 6500 | 0.0088 | 0.0094 |
| 0.0029 | 3.2726 | 7000 | 0.0089 | 0.0097 |
| 0.003 | 3.5063 | 7500 | 0.0085 | 0.0093 |
| 0.0032 | 3.7401 | 8000 | 0.0090 | 0.0093 |
| 0.0029 | 3.9738 | 8500 | 0.0084 | 0.0090 |
| 0.0019 | 4.2076 | 9000 | 0.0093 | 0.0089 |
| 0.0022 | 4.4413 | 9500 | 0.0083 | 0.0097 |
| 0.0022 | 4.6751 | 10000 | 0.0086 | 0.0092 |
| 0.0021 | 4.9088 | 10500 | 0.0085 | 0.0087 |
| 0.002 | 5.1426 | 11000 | 0.0089 | 0.0090 |
| 0.0011 | 5.3763 | 11500 | 0.0079 | 0.0081 |
| 0.0014 | 5.6101 | 12000 | 0.0076 | 0.0085 |
| 0.0014 | 5.8439 | 12500 | 0.0090 | 0.0090 |
| 0.0013 | 6.0776 | 13000 | 0.0082 | 0.0080 |
| 0.0009 | 6.3114 | 13500 | 0.0086 | 0.0083 |
| 0.0009 | 6.5451 | 14000 | 0.0088 | 0.0084 |
| 0.0009 | 6.7789 | 14500 | 0.0079 | 0.0071 |
| 0.0007 | 7.0126 | 15000 | 0.0083 | 0.0074 |
| 0.0006 | 7.2464 | 15500 | 0.0083 | 0.0081 |
| 0.0005 | 7.4801 | 16000 | 0.0092 | 0.0083 |
| 0.0005 | 7.7139 | 16500 | 0.0093 | 0.0078 |
| 0.0006 | 7.9476 | 17000 | 0.0088 | 0.0077 |
| 0.0003 | 8.1814 | 17500 | 0.0089 | 0.0071 |
| 0.0004 | 8.4151 | 18000 | 0.0089 | 0.0070 |
| 0.0004 | 8.6489 | 18500 | 0.0082 | 0.0071 |
| 0.0002 | 8.8827 | 19000 | 0.0086 | 0.0071 |
| 0.0001 | 9.1164 | 19500 | 0.0089 | 0.0071 |
| 0.0002 | 9.3502 | 20000 | 0.0092 | 0.0071 |
| 0.0001 | 9.5839 | 20500 | 0.0090 | 0.0071 |
| 0.0001 | 9.8177 | 21000 | 0.0090 | 0.0069 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.1.0
- Tokenizers 0.22.0
|
hartular/roLl31I-ALL630K-0003-EP2-2per
|
hartular
| 2025-09-18T18:27:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"base_model:finetune:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T17:59:18Z |
---
base_model: OpenLLM-Ro/RoLlama3.1-8b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hartular
- **License:** apache-2.0
- **Finetuned from model :** OpenLLM-Ro/RoLlama3.1-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
k1000dai/residualact_libero_object_no_tf_5_1e4
|
k1000dai
| 2025-09-18T18:18:33Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"residualact",
"robotics",
"dataset:k1000dai/libero-object-smolvla",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-18T18:18:21Z |
---
datasets: k1000dai/libero-object-smolvla
library_name: lerobot
license: apache-2.0
model_name: residualact
pipeline_tag: robotics
tags:
- residualact
- lerobot
- robotics
---
# Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF
|
echos-keeper
| 2025-09-18T18:13:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"arabic",
"grammatical-error-correction",
"gemma",
"unsloth",
"arabic-nlp",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ar",
"base_model:alnnahwi/gemma-3-1b-arabic-gec-v1",
"base_model:quantized:alnnahwi/gemma-3-1b-arabic-gec-v1",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T18:13:18Z |
---
license: gemma
library_name: transformers
language:
- ar
base_model: alnnahwi/gemma-3-1b-arabic-gec-v1
pipeline_tag: text-generation
tags:
- arabic
- grammatical-error-correction
- gemma
- unsloth
- arabic-nlp
- llama-cpp
- gguf-my-repo
---
# echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`alnnahwi/gemma-3-1b-arabic-gec-v1`](https://huggingface.co/alnnahwi/gemma-3-1b-arabic-gec-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/alnnahwi/gemma-3-1b-arabic-gec-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF --hf-file gemma-3-1b-arabic-gec-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF --hf-file gemma-3-1b-arabic-gec-v1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF --hf-file gemma-3-1b-arabic-gec-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo echos-keeper/gemma-3-1b-arabic-gec-v1-Q5_K_M-GGUF --hf-file gemma-3-1b-arabic-gec-v1-q5_k_m.gguf -c 2048
```
|
echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF
|
echos-keeper
| 2025-09-18T18:11:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Tami3/arabic-summarizer-v123",
"base_model:quantized:Tami3/arabic-summarizer-v123",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T18:11:34Z |
---
base_model: Tami3/arabic-summarizer-v123
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF
This model was converted to GGUF format from [`Tami3/arabic-summarizer-v123`](https://huggingface.co/Tami3/arabic-summarizer-v123) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Tami3/arabic-summarizer-v123) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF --hf-file arabic-summarizer-v123-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF --hf-file arabic-summarizer-v123-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF --hf-file arabic-summarizer-v123-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo echos-keeper/arabic-summarizer-v123-Q4_K_M-GGUF --hf-file arabic-summarizer-v123-q4_k_m.gguf -c 2048
```
|
RedHatAI/gemma-2-9b-it
|
RedHatAI
| 2025-09-18T18:02:20Z | 48 | 1 | null |
[
"safetensors",
"gemma2",
"gemma",
"conversational",
"text-generation-inference",
"text-generation",
"en",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:gemma",
"region:us"
] |
text-generation
| 2025-05-16T19:24:33Z |
---
language:
- en
base_model:
- google/gemma-2-9b-it
pipeline_tag: text-generation
tags:
- gemma
- gemma2
- conversational
- text-generation-inference
license: gemma
license_name: gemma
name: RedHatAI/gemma-2-9b-it
description: Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants.
readme: https://huggingface.co/RedHatAI/gemma-2-9b-it/main/README.md
tasks:
- text-to-text
provider: Google
license_link: https://ai.google.dev/gemma/terms
---
# Gemma 2 model card
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
gemma-2-9b-it
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
**Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
## Deployment
This model can be deployed efficiently on vLLM, Red Hat Enterprise Linux AI, and Openshift AI, as shown in the example below.
Deploy on <strong>vLLM</strong>
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/gemma-2-9b-it"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
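As a hedged sketch of that serving mode (not from the original card), a server started with `vllm serve RedHatAI/gemma-2-9b-it` on the default port 8000 can be queried with the OpenAI Python client:
```python
from openai import OpenAI

# Query a vLLM OpenAI-compatible endpoint; the base URL and port assume the
# default local `vllm serve` settings.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="RedHatAI/gemma-2-9b-it",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```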
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/gemma-2-9b-it
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/gemma-2-9b-it:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/gemma-2-9b-it
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/gemma-2-9b-it
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: gemma-2-9b-it # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: gemma-2-9b-it # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2' # this is model specific
          memory: 8Gi # this is model specific
          nvidia.com/gpu: '1' # this is accelerator specific
        requests: # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-gemma-2-9b-it:1.5
    tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-2-9b-it",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-9b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 9b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
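As a rough illustration, the same prompt can be assembled by hand from the tokens shown above; `tokenizer.apply_chat_template` remains the recommended approach, since it also handles multi-turn histories consistently.
```python
# Minimal sketch of building the Gemma 2 chat prompt manually,
# mirroring the template shown above (illustrative only).
def build_gemma2_prompt(messages):
    prompt = "<bos>"
    for message in messages:
        # Note: assistant turns use the role name "model" in this template.
        prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # open the model turn for generation
    return prompt

chat = [{"role": "user", "content": "Write a hello world program"}]
print(build_gemma2_prompt(chat))
```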
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
RedHatAI/gemma-2-9b-it-FP8
|
RedHatAI
| 2025-09-18T18:02:05Z | 10,169 | 5 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"gemma",
"fp8",
"vllm",
"conversational",
"text-generation-inference",
"en",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-08T15:10:07Z |
---
language:
- en
base_model:
- google/gemma-2-9b-it
pipeline_tag: text-generation
tags:
- gemma
- gemma2
- fp8
- vllm
- conversational
- text-generation-inference
license: gemma
license_name: gemma
name: RedHatAI/gemma-2-9b-it-FP8
description: This model was obtained by quantizing the weights and activations of gemma-2-9b-it to FP8 data type.
readme: https://huggingface.co/RedHatAI/gemma-2-9b-it-FP8/main/README.md
tasks:
- text-to-text
provider: Google
license_link: https://ai.google.dev/gemma/terms
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
gemma-2-9b-it-FP8
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Gemma 2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/8/2024
- **Version:** 1.0
- **Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
- **License(s):** [gemma](https://ai.google.dev/gemma/terms)
- **Model Developers:** Neural Magic (Red Hat)
Quantized version of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It achieves an average score of 73.49 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.23.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) to FP8 data type, ready for inference with vLLM >= 0.5.1.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling factor maps the FP8 representations of the quantized weights and activations back to their original dynamic range.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with a single instance of every token in random order.
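As a rough illustration of symmetric per-tensor FP8 (E4M3) scaling (not the exact AutoFP8 implementation), a single scale factor maps the tensor's maximum magnitude onto the FP8 representable range:
```python
import torch

# Illustrative sketch of symmetric per-tensor FP8 (E4M3) quantization;
# not the actual AutoFP8 code path.
FP8_E4M3_MAX = 448.0

def quantize_fp8_per_tensor(weight: torch.Tensor):
    # One scale for the whole tensor, chosen so the max magnitude maps to the FP8 max.
    scale = weight.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    q = (weight / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return q, scale

w = torch.randn(4096, 4096)
q, scale = quantize_fp8_per_tensor(w)
w_dequant = q.to(torch.float32) * scale  # reconstructed at inference time via the stored scale
```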
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/gemma-2-9b-it-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Who are you? Please respond in pirate speak!"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/gemma-2-9b-it-FP8
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/gemma-2-9b-it-FP8:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/gemma-2-9b-it-FP8
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/gemma-2-9b-it-FP8
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: gemma-2-9b-it-FP8 # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: gemma-2-9b-it-FP8 # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-gemma-2-9b-it-FP8:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-2-9b-it-FP8",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), as presented in the code snippet below.
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to using [llm-compressor](https://github.com/vllm-project/llm-compressor) which supports several quantization schemes and models not supported by AutoFP8.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np
import torch
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig
MODEL_DIR = "google/gemma-2-9b-it"
final_model_dir = MODEL_DIR.split("/")[-1]
CONTEXT_LENGTH = 4096
NUM_SAMPLES = 512
NUM_REPEATS = 1
pretrained_model_dir = MODEL_DIR
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=CONTEXT_LENGTH)
tokenizer.pad_token = tokenizer.eos_token
tokenizer_num_tokens = len(list(tokenizer.get_vocab().values()))
total_token_samples = NUM_REPEATS * tokenizer_num_tokens
num_random_samp = -(-total_token_samples // CONTEXT_LENGTH)
input_ids = np.tile(np.arange(tokenizer_num_tokens), NUM_REPEATS + 1)[:num_random_samp * CONTEXT_LENGTH]
np.random.shuffle(input_ids)
input_ids = input_ids.reshape(num_random_samp, CONTEXT_LENGTH)
input_ids = torch.tensor(input_ids, dtype=torch.int64).to("cuda")
quantize_config = BaseQuantizeConfig(
quant_method="fp8",
activation_scheme="static",
)
examples = input_ids
model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)
model.quantize(examples)
quantized_model_dir = f"{final_model_dir}-FP8"
model.save_quantized(quantized_model_dir)
```
## Evaluation
The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```bash
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/gemma-2-9b-it-FP8",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
--tasks openllm \
--batch_size auto
```
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>gemma-2-9b-it</strong>
</td>
<td><strong>gemma-2-9b-it-FP8(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>72.28
</td>
<td>71.99
</td>
<td>99.59%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>71.50
</td>
<td>71.50
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>76.26
</td>
<td>76.87
</td>
<td>100.7%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>81.91
</td>
<td>81.70
</td>
<td>99.74%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>77.11
</td>
<td>78.37
</td>
<td>101.6%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot)
</td>
<td>60.32
</td>
<td>60.52
</td>
<td>100.3%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>73.23</strong>
</td>
<td><strong>73.49</strong>
</td>
<td><strong>100.36%</strong>
</td>
</tr>
</table>
|
darkhorse0811/Affine-5FL3mTjbTpoSv6utrK2DKN7Fk1zzMXeQXexBf9cbXLZttn8U
|
darkhorse0811
| 2025-09-18T18:01:07Z | 36 | 0 | null |
[
"safetensors",
"qwen3",
"region:us"
] | null | 2025-09-17T19:45:04Z |
## Affine: Comprehensive LLM/Validator Guide (llms.txt)
### Overview
Affine is an incentivized RL system built on Bittensor Subnet 120. Validators continuously generate challenges across multiple environments, query miners’ hosted LLMs via Chutes, evaluate responses, and publish signed results to Cloudflare R2. A winners-take-all scoring rule uses ε-Pareto dominance over environment subsets to set on-chain weights.
### Repository map (high level)
- `affine/__init__.py`: CLI, core models, validator/runner, R2 storage, signer client, scoring, miners/query, Prometheus metrics
- `affine/envs/`: task generators and evaluators
- `sat.py`: k-SAT generation and evaluation
- `abd.py`: program abduction (infer stdin to match output)
- `ded.py`: program deduction (write code to pass test cases)
- `affine/utils/`: runtime utilities
- `executor.py`: safe Python program runner for ABD/DED
- `dataset.py`: buffered Hugging Face dataset fetcher
- Packaging/config: `pyproject.toml`, `Dockerfile`, `docker-compose.yml`, `docker-compose.local.yml`, `prometheus.yml`
- Ops/content: `build_and_push.sh` (if present), `site/` (static viewer), `notebooks/`
### CLI entrypoint
Install via uv and run the `af` CLI (see README for full steps).
- `af -vv validate`: run the validator loop (Prometheus metrics on AFFINE_METRICS_PORT)
- `af -vv runner`: run a continuous challenge+evaluate+sink loop (off-chain)
- `af weights`: compute weights once and print summary table
- `af pull <uid> --model_path <dir>`: download a miner’s model from Hugging Face
- `af push [--model_path <dir> | --existing-repo <user/repo>] [--revision <sha>]`:
- Upload artifacts to HF (unless using `--existing-repo`)
- Make repo public
- Deploy Chute with generated config
- Commit (model, revision, chute_id) on-chain
- Warm up the chute until it’s hot
- `af signer`: start a lightweight HTTP signer service (used by validator)
### Environments
- SAT (`affine/envs/sat.py`)
- Generates random k-SAT formula over x1..xn; prompt asks for a satisfying assignment or UNSAT
  - Evaluation parses `xN=True/False/1/0` pairs and checks every clause is satisfied (a rough sketch of this check appears after this list)
- ABD (Program Abduction, `affine/envs/abd.py`)
- Uses HF dataset `satpalsr/rl-python` samples: Python program, example input/output
- LLM is prompted to produce a fresh valid stdin wrapped in `<INPUT>..</INPUT>` tags for the given program so that its output matches the example output
- Input is validated heuristically (line counts vs input() calls, loop shape)
- Execution via `ProgramExecutor`; evaluation re-runs program with extracted input and checks output equivalence (whitespace/line tolerant)
- DED (Program Deduction, `affine/envs/ded.py`)
- Prompt asks the model to produce a complete Python solution (fenced) that reads from STDIN and prints answers only
- Verification pulls `test_cases` (stdin/stdout or function_call) from the sample, executes program per case, normalizes outputs, and scores 1.0 only if all pass
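The SAT evaluation step mentioned above can be sketched as follows. The clause representation (signed integers) and the exact regular expression are assumptions for illustration; the real logic lives in `affine/envs/sat.py`.
```python
import re

# Rough sketch of the SAT check: parse `xN=True/False/1/0` pairs,
# then verify every clause has at least one satisfied literal.
# A clause is assumed to be a list of signed ints: 3 means x3, -3 means NOT x3.
def parse_assignment(response: str) -> dict:
    assignment = {}
    for var, val in re.findall(r"x(\d+)\s*=\s*(True|False|1|0)", response):
        assignment[int(var)] = val in ("True", "1")
    return assignment

def clauses_satisfied(clauses, assignment) -> bool:
    def literal_true(lit):
        value = assignment.get(abs(lit), False)
        return value if lit > 0 else not value
    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

clauses = [[1, -2], [2, 3]]  # (x1 OR NOT x2) AND (x2 OR x3)
print(clauses_satisfied(clauses, parse_assignment("x1=True x2=0 x3=1")))  # True
```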
### Querying miners (Chutes)
- Endpoint: `https://{slug}.chutes.ai/v1/chat/completions`
- Auth: `Authorization: Bearer ${CHUTES_API_KEY}`
- Payload: `{ model, messages: [{ role: "user", content: prompt }] }`
- Retries/backoff configurable; response content extracted from `choices[0].message.content`
- Gated HF models are skipped (checked via Hugging Face API and optional revision presence)
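A minimal sketch of such a query, using the endpoint, header, and payload shape described above; the slug, model name, and prompt are placeholders, and retries/backoff are omitted.
```python
import os
import requests

# Illustrative Chutes query; the real client is async with configurable retries.
slug = "example-chute-slug"           # placeholder
payload = {
    "model": "affine/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Return a satisfying assignment or UNSAT."}],
}
resp = requests.post(
    f"https://{slug}.chutes.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['CHUTES_API_KEY']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```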
### Miner discovery
- Reads Bittensor metagraph for netuid 120 and revealed commitments containing `{ model, revision, chute_id }`
- Filters out gated models and non-`affine/*` families (except base UID 0)
- Deduplicates by keeping earliest block per model
### Results pipeline and storage (Cloudflare R2)
- Windowing
- `AFFINE_WINDOW` (default 20) defines shard window based on block numbers
- A shard key is `affine/results/{WINDOW_START_BLOCK}-{HOTKEY}.json`
- Index
- `affine/index.json` contains a JSON array of shard keys
- When a shard is first written, the index is updated (deduplicated and sorted)
- Sink
- Results are signed (via signer service or local wallet) and appended to the shard; new shard triggers index update
- Local cache
- Shards are downloaded once and stored under `AFFINE_CACHE_DIR` (default `~/.cache/affine/blocks`), with `.modified` timestamp files
- `dataset(tail)` streams `Result` objects from cached JSONL, validating signatures
R2/S3 client
- Uses `aiobotocore` with endpoint `https://{R2_BUCKET_ID}.r2.cloudflarestorage.com`
- Bucket: `R2_FOLDER` (default `affine`)
- Keys: `INDEX_KEY=affine/index.json`, `RESULT_PREFIX=affine/results/`
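A read-only sketch of this layout is below. The repo itself uses `aiobotocore`; `boto3` is used here only for brevity, and the window-start arithmetic is an assumption inferred from the shard naming convention.
```python
import json
import os
import boto3

WINDOW = int(os.getenv("AFFINE_WINDOW", "20"))

def shard_key(block: int, hotkey: str) -> str:
    window_start = (block // WINDOW) * WINDOW  # assumed rounding to the window start
    return f"affine/results/{window_start}-{hotkey}.json"

s3 = boto3.client(
    "s3",
    endpoint_url=f"https://{os.environ['R2_BUCKET_ID']}.r2.cloudflarestorage.com",
    aws_access_key_id=os.environ["R2_WRITE_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["R2_WRITE_SECRET_ACCESS_KEY"],
)
bucket = os.getenv("R2_FOLDER", "affine")
index = json.loads(s3.get_object(Bucket=bucket, Key="affine/index.json")["Body"].read())
print(index[:5])  # first few shard keys
```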
### Scoring and weight setting
- Periodic cadence: validator waits for blocks where `block % TEMPO == 0` (TEMPO=100)
- Data ingestion: last `TAIL=10_000` blocks streamed via `dataset(tail)`
- Accumulation per miner per env: counts and success rates (accuracy)
- Eligibility: require ≥90% of per-env max sample counts
- ε-Pareto dominance
- Not-worse threshold uses `Z_NOT_WORSE` and `EPS_FLOOR` based on standard error of difference
- Better-on-some-env threshold uses `EPS_WIN` (floor) and optional `Z_WIN`
- Global dominance counts over full env set; canonical best used for ties/fallbacks
- Combinatoric subset winners
- For all non-empty env subsets S, award K_s to the ε-Pareto winner on S
- K_1 = scale, K_s = C(N, s-1)*K_{s-1}
- Normalize scores over eligibles to produce weights; if none eligible, assign 1.0 to canonical best
- On-chain set_weights
- Delegated to the signer HTTP service (`/set_weights`) and confirmed by checking `last_update`
- Fallback to local submission only if signer is unreachable
Key hyperparameters (defaults)
- `NETUID=120`, `TAIL=10_000`, `ALPHA=0.9`
- `EPS_FLOOR=0.002`, `Z_NOT_WORSE=0.84`, `EPS_WIN=0.0015`, `Z_WIN=0.0`
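The subset prize schedule above follows a simple recurrence; a short sketch (with an arbitrary `scale`) makes it concrete.
```python
from math import comb

# K_1 = scale, K_s = C(N, s-1) * K_{s-1} for subsets of size s over N environments.
def subset_prizes(num_envs: int, scale: float = 1.0) -> dict:
    prizes = {1: scale}
    for s in range(2, num_envs + 1):
        prizes[s] = comb(num_envs, s - 1) * prizes[s - 1]
    return prizes

print(subset_prizes(3))  # {1: 1.0, 2: 3.0, 3: 9.0}
```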
### Signer service
- Start with `af -v signer` (listens on `${SIGNER_HOST}:${SIGNER_PORT}`)
- Endpoints
- `GET /healthz` → `{ ok: true }`
- `POST /sign` → `{ signatures: [hex...], hotkey }` for provided string payloads
- `POST /set_weights` → triggers on-chain set_weights with confirmation
- Used by validator via `${SIGNER_URL}`; includes DNS logging + request/response logging
### Prometheus metrics (port/address configurable)
- Counters/Gauges
- `qcount{model}`: number of LLM queries
- `score{uid,env}`: per-miner per-env accuracy
- `rank{uid,env}`: per-env rank among eligibles
- `weight{uid}`: current weight
- `lastset`: time of last successful weight set
- `nresults`: processed result count
- `maxenv{env}`: best accuracy per env among active miners
- `cache`: local cache size (bytes)
- Exporter binds at `${AFFINE_METRICS_ADDR}:${AFFINE_METRICS_PORT}`
### Program execution sandbox (ABD/DED)
- `ProgramExecutor` limits: wallclock, CPU, memory, and output size
- Strips code fences; if program defines `solve()` and produces no output, auto-injects `if __name__ == "__main__":` runner
- Cleans up temp files; kills entire process group on timeout/truncation
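A generic illustration of this kind of limited execution is shown below; it is not the actual `ProgramExecutor` implementation, just the standard-library mechanisms it builds on.
```python
import resource
import subprocess

# Run an untrusted Python program with CPU, memory, wallclock, and output limits.
def run_limited(program: str, stdin_text: str, timeout_s: int = 10) -> str:
    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))         # CPU seconds
        resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))  # address space
    proc = subprocess.run(
        ["python3", "-c", program],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=timeout_s,        # wallclock limit
        preexec_fn=set_limits,
        start_new_session=True,   # own process group, easier to kill as a unit
    )
    return proc.stdout[:1_000_000]  # cap output size
```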
### Buffered dataset (Hugging Face)
- `BufferedDataset` fetches random windows from `https://datasets-server.huggingface.co/rows` with retries and exponential backoff
- Internal buffer filled concurrently and served via `get()`; used by ABD/DED
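A rough sketch of one such fetch, using the public `rows` endpoint (retries and backoff omitted; the dataset, config, and offset values are illustrative):
```python
import random
import requests

def fetch_rows(dataset: str = "satpalsr/rl-python", split: str = "train", length: int = 100):
    offset = random.randint(0, 1000)  # illustrative; the real code sizes this from the dataset
    resp = requests.get(
        "https://datasets-server.huggingface.co/rows",
        params={"dataset": dataset, "config": "default", "split": split,
                "offset": offset, "length": length},
        timeout=30,
    )
    resp.raise_for_status()
    return [row["row"] for row in resp.json()["rows"]]
```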
### Configuration (env vars)
- Bittensor/Subtensor
- `SUBTENSOR_ENDPOINT` (default `finney`), `SUBTENSOR_FALLBACK` (default `wss://lite.sub.latent.to:443`)
- `BT_WALLET_COLD`, `BT_WALLET_HOT`
- Chutes/Hugging Face
- `CHUTES_API_KEY` (required for queries/deploy), `CHUTE_USER`
- `HF_USER`, `HF_TOKEN`
- R2/S3
- `R2_BUCKET_ID` (account subdomain), `R2_FOLDER` (bucket root folder), `R2_WRITE_ACCESS_KEY_ID`, `R2_WRITE_SECRET_ACCESS_KEY`
- `AFFINE_WINDOW` (shard size), `AFFINE_CACHE_DIR`
- Networking/Concurrency
- `AFFINE_METRICS_ADDR`, `AFFINE_METRICS_PORT`
- `AFFINE_HTTP_CONCURRENCY` (default 16), `AFFINE_UPLOAD_CONCURRENCY` (default 2)
- Signer
- `SIGNER_HOST`, `SIGNER_PORT`, `SIGNER_URL` (e.g., `http://signer:8080`)
- `SIGNER_RETRIES`, `SIGNER_RETRY_DELAY`
### Docker Compose (production and local override)
- Services
- `validator`: `af -vv validate`, metrics on 8000 (host 8001), depends on `signer`
- `runner`: `af -vv runner`, metrics on 8000 (host 8002)
- `signer`: exposes 8080; mounts wallet dir read-only
- `prometheus` (9090) and `grafana` (host 8000) for telemetry
- `watchtower` auto-updates images
- Local build
- Use with override: `docker compose -f docker-compose.yml -f docker-compose.local.yml up --build`
### SDK usage
Example from README:
```python
import affine as af
af.trace(); af.debug(); af.info()
miners = await af.get_miners(); miner = await af.get_miners(5)
chal = await af.SAT.generate()
chals = await af.ABDUCTION().many(10); chals = await af.DEDUCTION().many(10)
response = await af.query(chal.prompt, model=miner.model)
evaluation = chal.evaluate(response)
print(evaluation.score)
async for res in af.rollouts(100):
print(res)
```
### Static site (R2 index viewer)
- Index key: `affine/index.json` (JSON array of shard keys)
- Endpoint template: `https://{R2_BUCKET_ID}.r2.cloudflarestorage.com/{R2_FOLDER}/{OBJECT_KEY}`
- Requires S3 SigV4 signing in the browser (region `auto`, service `s3`, `x-amz-content-sha256=UNSIGNED-PAYLOAD`)
- Flow: fetch index → for each key fetch shard → render list/download
- Prefer presigned URLs or read-only keys for public deployments
### Notes and best practices
- Always override the example/default R2 credentials with your own via `.env`
- Keep HF repos private during upload; visibility is set to public right before deploy
- The validator requires a running signer service; do not expose wallet keys in validator containers
- For ABD/DED, ensure models return only the requested content (stdin or fenced python) to avoid grading penalties
### Quick commands
```bash
# Validate (local):
af -vv validate
# Runner (off-chain ingestion + sink):
af -vv runner
# Pull miner model:
af -vvv pull <uid> --model_path ./my_model
# Push your model (deploy chute and commit on-chain):
af -vvv push --model_path ./my_model
```
|
PracticalWork/ModernBERT-large-classifier
|
PracticalWork
| 2025-09-18T17:57:22Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-07-28T21:03:50Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ModernBERT-large-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT-large-classifier
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3071
- Accuracy: 0.9001
- F1: 0.8191
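A minimal usage sketch (assumed, since the card does not include one); the returned label names depend on this model's `id2label` configuration.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="PracticalWork/ModernBERT-large-classifier")
print(classifier("Example input text to classify."))
```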
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| No log | 0 | 0 | 0.7754 | 0.4796 | 0.5011 |
| No log | 0.6006 | 188 | 0.3604 | 0.8633 | 0.7299 |
| No log | 1.2013 | 376 | 0.2724 | 0.8897 | 0.7844 |
| 0.2959 | 1.8019 | 564 | 0.2549 | 0.9073 | 0.8343 |
| 0.2959 | 2.4026 | 752 | 0.3005 | 0.9033 | 0.8274 |
### Framework versions
- Transformers 4.53.3
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
KhanhDao1512/my-embedding-gemma
|
KhanhDao1512
| 2025-09-18T17:41:39Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:8116",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T17:41:19Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:8116
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: "Body: finally this guy came up with new pokemon go tricks to get\
\ back your pokemon go account\nafter permanent ban in 2 days\nand how to recover\
\ your pokemon go account after permanent ban\nor if u r havung this issues like\
\ \nour servers are humbled by your incredible response\nfailed to get data from\
\ the server in pokemon go\nand etche will clear your thoughts what are they\n\
and he have prove\n\ncheck him\nyoutu be kxdhw9h88pk\n\nand cheer him coz he save\
\ me | Rule: no advertising spam referral links unsolicited advertising and promotional\
\ content are not allowed | Violation: no violation"
sentences:
- oh thats interesting you should check out my instagram page dailyetymology for
more cool etymology
- "i had heard about this new social media app called flagpoll and at first i was\
\ iffy but holy shit was i wrong\n\ni downloaded it and have been on it at least\
\ once a day so now im convinced that flagpoll is going to something big \n\n\
appsto re us r5bcb i\n\n"
- 'search up any online parts stores in your country see if they got the v8 accessories
kit
cmstoreusa v8 accessories kit'
- source_sentence: 'Body: must be watch movie sites google site georgiegirlhdfullmovie
the franchise a season with the san francisco giants free full streaming | Rule:
no advertising spam referral links unsolicited advertising and promotional content
are not allowed | Violation: no violation'
sentences:
- m giving out tyrande codes for 4 dollars upfront paypal for medivh hero code
warcraft movie box for tespa dalaran flame cardback code pm me please i have
proof of multiple successful trades
- 'here is my discussionreview video of re zero episode 14 the sickness called despair
tune in and make sure to subscribe
youtube watch'
- 'wow im baffled by how horrible the advice you got here op they even have the
nerve to call it legal advice
honestly you should be able to sue your father for the cost of the pc it will
ruin your relationship for good but from the looks of it you didnt have one to
begin with just because hes your parent he has no right to destroy things that
belong to you
'
- source_sentence: 'Body: sd stream english arsenal vs preston northechostadiums
club event arsenal vs preston north | Rule: no advertising spam referral links
unsolicited advertising and promotional content are not allowed | Violation: no
violation'
sentences:
- 'if this is happening to you please get proof and report it its illegal and should
not be happening '
- i found a clash of clans diamonds giveaway maxclashgems you can get the gems again
and again k5bbw
- "this is a goood movie watch it now\n\nvisit us in our blog \n\nmoviesgalaxi\
\ blogspot"
- source_sentence: 'Body: wrong refusing to return something lent to you is theft
a police report should be filed | Rule: no legal advice do not offer or request
legal advice | Violation: violation'
sentences:
- 'you could just go with the ptsd from the robbery route and go on disability probably
easier to prove and get the benefits from i know someone who did that for a similar
reason but there was an actual assault '
- should i be using my sons poop to give myself fecal transplants should i freeze
it for later use
- its a pretty classic case of overextending like when i flirted with my nephews
gorgeous gamecatt lookalike physical therapist and got her number but she ended
up calling the cops after i texted her a slice of erotic whimsy telling her i
was going to fill her rectum with teeth which she misinterpreted as me threatening
to bite her ass
- source_sentence: 'Body: fingerblasting implies it being consentual
more lol i raped my own sister lol xd randum haha shell always be haunted by what
i did to her lmao | Rule: no legal advice do not offer or request legal advice
| Violation: no violation'
sentences:
- 'get 50 discount on all academic writing and editing services at cheapcustomwriting
plagiarism free assignment writing essay writing dissertation writing thesis writing
phd proposal writing research paper writing coursework writing medical writing
term paper writing and resume writing at cheapcustomwriting
our active 50 code is nbxqwruf6ewv
visit wwwcheapcustomwritingcom'
- gather evidence see a lawyer immediately preemptive strike
- 'a quick google search found this you can trick the card into thinking it’s doing
a chipandsignature transaction while the terminal thinks it’s chipandpin the upshot
is that you can buy stuff using a stolen card and a pin of 0000 or anything you
want
source securityaffairs co wordpress 25134 cyber crime chip and pin cloning html'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("KhanhDao1512/my-embedding-gemma")
# Run inference
queries = [
"Body: fingerblasting implies it being consentual\n\nmore lol i raped my own sister lol xd randum haha shell always be haunted by what i did to her lmao | Rule: no legal advice do not offer or request legal advice | Violation: no violation",
]
documents = [
'gather evidence see a lawyer immediately preemptive strike',
'a quick google search found this you can trick the card into thinking it’s doing a chipandsignature transaction while the terminal thinks it’s chipandpin the upshot is that you can buy stuff using a stolen card and a pin of 0000 or anything you want\n\nsource securityaffairs co wordpress 25134 cyber crime chip and pin cloning html',
'get 50 discount on all academic writing and editing services at cheapcustomwriting\n\n\nplagiarism free assignment writing essay writing dissertation writing thesis writing phd proposal writing research paper writing coursework writing medical writing term paper writing and resume writing at cheapcustomwriting\n\nour active 50 code is nbxqwruf6ewv\n\nvisit wwwcheapcustomwritingcom',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9996, 0.3074, 0.1825]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 8,116 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 60.21 tokens</li><li>max: 132 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 40.74 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 33.66 tokens</li><li>max: 162 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Body: banks dont want you to know this click here to know more \| Rule: no advertising spam referral links unsolicited advertising and promotional content are not allowed \| Violation: no violation</code> | <code>if you could tell your younger self something different about sex what would that be<br><br>i am in a contest to win funding for my sex positive film vote here<br><br>sheknows offerpop campaign 813112 entry v144417</code> | <code>watch golden globe awards 2017 live online in hd coverage without ads vip streams<br><br><br>hd stream quality watch link1forum submitexpress viewtopic php<br><br><br>hd broadcasting quality watch link1forum submitexpress viewtopic php<br><br><br>mobile compatibility yes<br><br><br>no ads no ads ads<br><br></code> |
| <code>Body: banks dont want you to know this click here to know more \| Rule: no advertising spam referral links unsolicited advertising and promotional content are not allowed \| Violation: no violation</code> | <code>if you could tell your younger self something different about sex what would that be<br><br>i am in a contest to win funding for my sex positive film vote here<br><br>sheknows offerpop campaign 813112 entry v144417</code> | <code>double cee x bands epps birds<br><br>downloadstream<br><br>music theblacksmithed download birds</code> |
| <code>Body: banks dont want you to know this click here to know more \| Rule: no advertising spam referral links unsolicited advertising and promotional content are not allowed \| Violation: no violation</code> | <code>hunt for lady for jack off in neighbourhood url inmusi gakq</code> | <code>watch golden globe awards 2017 live online in hd coverage without ads vip streams<br><br><br>hd stream quality watch link1forum submitexpress viewtopic php<br><br><br>hd broadcasting quality watch link1forum submitexpress viewtopic php<br><br><br>mobile compatibility yes<br><br><br>no ads no ads ads<br><br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
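For reference, a minimal sketch of instantiating this loss with Sentence Transformers (assuming the model is loaded as in the Usage section above):

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import cos_sim

model = SentenceTransformer("KhanhDao1512/my-embedding-gemma")
# scale=20.0 and cosine similarity, matching the parameters above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```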
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 1
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `prompts`: task: sentence similarity | query:
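A minimal sketch of passing these non-default values to the trainer (the `output_dir` below is a placeholder; all other hyperparameters keep their defaults):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="my-embedding-gemma-finetune",  # placeholder
    num_train_epochs=3,
    per_device_train_batch_size=1,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    prompts="task: sentence similarity | query: ",
)
```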
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: task: sentence similarity | query:
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:----:|:-------------:|
| 2.0 | 8116 | 0.5455 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.1.0
- Transformers: 4.57.0.dev0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-mrpc
|
aamijar
| 2025-09-18T17:39:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T17:39:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lfhe/FLock-Arena-Task-15-Carbonia
|
lfhe
| 2025-09-18T17:32:58Z | 31 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"region:us"
] |
text-generation
| 2025-02-21T01:26:02Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:microsoft/Phi-4-mini-instruct
- llama-factory
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
RedHatAI/Qwen2.5-7B-Instruct
|
RedHatAI
| 2025-09-18T17:27:36Z | 241 | 0 | null |
[
"safetensors",
"qwen2",
"qwen",
"qwen2_5",
"qwen2_5_instruct",
"conversational",
"text-generation-inference",
"text-generation",
"zh",
"en",
"fr",
"es",
"pt",
"de",
"it",
"ru",
"ja",
"ko",
"vi",
"th",
"ar",
"id",
"tr",
"fa",
"nl",
"pl",
"cs",
"he",
"sv",
"fi",
"da",
"no",
"el",
"bg",
"uk",
"ur",
"sr",
"ms",
"zsm",
"nld",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-05-09T23:17:13Z |
---
language:
- zh
- en
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- id
- tr
- fa
- nl
- pl
- cs
- he
- sv
- fi
- da
- no
- el
- bg
- uk
- ur
- sr
- ms
- zsm
- nld
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- qwen
- qwen2_5
- qwen2_5_instruct
- conversational
- text-generation-inference
license: apache-2.0
license_name: apache-2.0
name: RedHatAI/Qwen2.5-7B-Instruct
description: The instruction-tuned 7B Qwen2.5 model, which has been optimized for multilingual dialogue use cases.
readme: https://huggingface.co/RedHatAI/Qwen2.5-7B-Instruct/main/README.md
tasks:
- text-to-text
provider: Alibaba Cloud
license_link: https://www.apache.org/licenses/LICENSE-2.0
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Qwen2.5-7B-Instruct
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
**Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
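A quick way to check your environment before loading the model (a minimal sketch, not part of the official instructions):

```python
import transformers
from packaging import version

# qwen2 support landed in transformers 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    f"transformers {transformers.__version__} is too old for the qwen2 architecture"
```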
## Quickstart
The code snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Deployment
This model can be deployed efficiently on vLLM, Red Hat Enterprise Linux AI, and Red Hat OpenShift AI, as shown in the examples below.
Deploy on <strong>vLLM</strong>
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Qwen2.5-7B-Instruct"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
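For example, after starting a server with `vllm serve RedHatAI/Qwen2.5-7B-Instruct`, you can query it with the OpenAI Python client (the endpoint and dummy API key below are vLLM's defaults; adjust as needed):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="RedHatAI/Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```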
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Qwen2.5-7B-Instruct
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/qwen2-5-7b-instruct:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/qwen2-5-7b-instruct
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/qwen2-5-7b-instruct
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat OpenShift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: Qwen2.5-7B-Instruct # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
  name: qwen2-5-7b-instruct # specify model name (must be a lowercase DNS-1123 name). This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-qwen2-5-7b-instruct:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen2.5-7B-Instruct",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
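If you prefer to patch a local copy of the model programmatically rather than editing `config.json` by hand, a minimal sketch (the local path below is a placeholder):

```python
import json
from pathlib import Path

config_path = Path("Qwen2.5-7B-Instruct/config.json")  # placeholder local checkout
config = json.loads(config_path.read_text())
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
config_path.write_text(json.dumps(config, indent=2))
```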
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
RedHatAI/Llama-3.3-70B-Instruct
|
RedHatAI
| 2025-09-18T17:26:24Z | 3,025 | 0 | null |
[
"safetensors",
"llama",
"facebook",
"meta",
"llama-3",
"conversational",
"text-generation-inference",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"region:us"
] |
text-generation
| 2025-05-09T22:43:59Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- meta-llama/Llama-3.3-70B-Instruct
pipeline_tag: text-generation
tags:
- llama
- facebook
- meta
- llama-3
- conversational
- text-generation-inference
license: llama3.3
license_name: llama3.3
name: RedHatAI/Llama-3.3-70B-Instruct
description: The Meta Llama 3.3 multilingual large language model (LLM) is an instruction tuned generative model in 70B.
readme: https://huggingface.co/RedHatAI/Llama-3.3-70B-Instruct/main/README.md
tasks:
- text-to-text
provider: Meta
license_link: https://www.llama.com/llama3_3/license/
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Llama-3.3-70B-Instruct
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
**Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
## Model Information
**Built with Llama**
The Meta Llama 3.3 multilingual large language model (LLM) is an instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.3 (text only) | A new mix of publicly available online data. | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.3 model**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:**
* **70B Instruct: December 6, 2024**
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license, the Llama 3.3 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3\_3/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.3 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.3 model also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.3 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.3 Community License. Use in languages beyond those explicitly referenced as supported in this model card\*\*.
\*\*Note: Llama 3.3 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.3 models for languages beyond the 8 supported languages provided they comply with the Llama 3.3 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.3 in additional languages is done in a safe and responsible manner.
## How to use
This repository contains two versions of Llama-3.3-70B-Instruct, for use with transformers and with the original `llama` codebase.
## Deployment
This model can be deployed efficiently on vLLM, Red Hat Enterprise Linux AI, and Red Hat OpenShift AI, as shown in the examples below.
Deploy on <strong>vLLM</strong>
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Llama-3.3-70B-Instruct"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Llama-3.3-70B-Instruct
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/llama-3-3-70b-instruct:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/llama-3-3-70b-instruct
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/llama-3-3-70b-instruct
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat OpenShift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: llama-3-3-70b-instruct # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: llama-3-3-70b-instruct # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-llama-3-3-70b-instruct:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama-3-3-70b-instruct",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
### Use with transformers
Starting with `transformers >= 4.45.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "meta-llama/Llama-3.3-70B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Tool use with transformers
Llama 3.3 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# First, define a tool
def get_current_temperature(location: str) -> float:
"""
Get the current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, Country"
Returns:
The current temperature at the specified location in the specified units, as a float.
"""
return 22. # A real function should probably actually get the temperature!
# Next, create a chat and apply the chat template
messages = [
{"role": "system", "content": "You are a bot that responds to weather queries."},
{"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```
You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
and then call the tool and append the result, with the `tool` role, like so:
```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information,
see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
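A minimal sketch of that final round-trip, assuming `model` and `tokenizer` have been loaded with the Auto classes (`AutoModelForCausalLM` / `AutoTokenizer`), which the snippets above do not show in full:

```python
# Re-render the chat (now containing the tool call and tool result) and generate
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```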
### Use with `bitsandbytes`
The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`.
See the snippet below for usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "meta-llama/Llama-3.3-70B-Instruct"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
To load in 4-bit, simply pass `load_in_4bit=True` instead.
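For example, a minimal 4-bit variant of the snippet above (reusing the same imports and `model_id`):

```python
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config
)
```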
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --include "original/*" --local-dir Llama-3.3-70B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use** Training utilized a cumulative **39.3M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 3.3 70B | 7.0M | 700 | 2,040 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.3 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmarks - English Text
In this section, we report the results for Llama 3.3 relative to our previous models.
### Instruction tuned models
| Category | Benchmark | \# Shots | Metric | Llama 3.1 8B Instruct | Llama 3.1 70B Instruct | Llama-3.3 70B Instruct | Llama 3.1 405B Instruct |
| :---- | :---- | ----- | :---- | ----- | ----- | ----- | ----- |
| | MMLU (CoT) | 0 | macro\_avg/acc | 73.0 | 86.0 | 86.0 | 88.6 |
| | MMLU Pro (CoT) | 5 | macro\_avg/acc | 48.3 | 66.4 | 68.9 | 73.3 |
| Steerability | IFEval | | | 80.4 | 87.5 | 92.1 | 88.6 |
| Reasoning | GPQA Diamond (CoT) | 0 | acc | 31.8 | 48.0 | 50.5 | 49.0 |
| Code | HumanEval | 0 | pass@1 | 72.6 | 80.5 | 88.4 | 89.0 |
| | MBPP EvalPlus (base) | 0 | pass@1 | 72.8 | 86.0 | 87.6 | 88.6 |
| Math | MATH (CoT) | 0 | sympy\_intersection\_score | 51.9 | 68.0 | 77.0 | 73.8 |
| Tool Use | BFCL v2 | 0 | overall\_ast\_summary/macro\_avg/valid | 65.4 | 77.5 | 77.3 | 81.1 |
| Multilingual | MGSM | 0 | em | 68.9 | 86.9 | 91.1 | 91.6 |
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy to manage trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.3 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
#### Llama 3.3 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.3 systems
**Large language models, including Llama 3.3, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
#### Capability specific considerations
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards.
**Multilinguality**: Llama 3.3 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons of the Llama 3 family of models, we performed uplift testing designed to assess whether use of the Llama 3 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2\. Child Safety**
Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.
**3\. Cyber attack enablement**
Our cyber attack uplift study investigated whether the Llama 3 family of LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.3 model, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
RedHatAI/phi-4-FP8-dynamic
|
RedHatAI
| 2025-09-18T17:25:49Z | 1,455 | 0 | null |
[
"safetensors",
"phi3",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"compressed-tensors",
"text-generation",
"en",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"region:us"
] |
text-generation
| 2025-03-03T22:11:29Z |
---
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
tags:
- phi
- phi3
- nlp
- math
- code
- chat
- conversational
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
- compressed-tensors
license: mit
license_name: mit
name: RedHatAI/phi-4-FP8-dynamic
description: This model was obtained by quantizing activation and weights of phi-4 to FP8 data type.
readme: https://huggingface.co/RedHatAI/phi-4-FP8-dynamic/main/README.md
tasks:
- text-to-text
provider: Red Hat
license_link: https://choosealicense.com/licenses/mit/
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
phi-4-FP8-dynamic
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Phi3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:** This model is designed to accelerate research on language models, for use as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:
1. Memory/compute constrained environments.
2. Latency bound scenarios.
3. Reasoning and logic.
- **Out-of-scope:** This model is not specifically designed or evaluated for all downstream purposes, thus:
1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.
3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
- **Release Date:** 03/03/2025
- **Version:** 1.0
- **Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activation and weights of [phi-4](https://huggingface.co/microsoft/phi-4) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
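As a rough illustration of this scheme, the sketch below computes per-channel weight scales and per-token activation scales the way a symmetric FP8 recipe would. It is a minimal sketch for intuition only, not the llm-compressor internals, and it assumes PyTorch >= 2.1 for the `float8_e4m3fn` dtype.
```python
# Minimal sketch of the scheme above (illustrative; not the llm-compressor internals).
import torch

FP8_MAX = 448.0  # largest finite magnitude representable in float8_e4m3fn

def quantize_weight_per_channel(w: torch.Tensor):
    """Symmetric *static* per-channel scales for a Linear weight of shape (out, in)."""
    scale = w.abs().amax(dim=1, keepdim=True) / FP8_MAX  # one scale per output channel
    return (w / scale).to(torch.float8_e4m3fn), scale

def quantize_activation_per_token(x: torch.Tensor):
    """Symmetric *dynamic* per-token scales, recomputed on the fly at inference."""
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_MAX  # one scale per token
    return (x / scale).to(torch.float8_e4m3fn), scale

w_fp8, w_scale = quantize_weight_per_channel(torch.randn(16, 64))
x_fp8, x_scale = quantize_activation_per_token(torch.randn(4, 64))
# Dequantize (cast back to float) to verify the matmul approximates the original:
# (x_fp8.float() * x_scale) @ (w_fp8.float() * w_scale).T ≈ x @ w.T
```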
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/phi-4-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Give me a short introduction to large language model."},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
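For a quick local smoke test, the same model can be exposed through vLLM's OpenAI-compatible server with the `vllm serve` CLI. This is a minimal sketch; it assumes vLLM is installed on the host, and the port and flags should be adjusted to your environment:

```bash
# Start an OpenAI-compatible endpoint on localhost:8000 (sketch; adjust flags to your hardware)
vllm serve RedHatAI/phi-4-FP8-dynamic --max-model-len 4096

# Query the chat completions route
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "RedHatAI/phi-4-FP8-dynamic", "messages": [{"role": "user", "content": "Hello"}]}'
```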
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/phi-4-FP8-dynamic
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/phi-4-fp8-dynamic:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/phi-4-fp8-dynamic
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/phi-4-fp8-dynamic
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: phi-4-FP8-dynamic # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: phi-4-FP8-dynamic # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-phi-4-fp8-dynamic:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "phi-4-FP8-dynamic",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
# Load model
model_stub = "microsoft/phi-4"
model_name = model_stub.split("/")[-1]
tokenizer = AutoTokenizer.from_pretrained(model_stub)
model = AutoModelForCausalLM.from_pretrained(
model_stub,
device_map="auto",
torch_dtype="auto",
)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear",
scheme="FP8_dynamic",
ignore=["lm_head"],
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```bash
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/phi-4-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.6,max_model_len=4096,enable_chunked_prefill=True,tensor_parallel_size=1 \
--tasks openllm \
--batch_size auto
```
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>phi-4</strong>
</td>
<td><strong>phi-4-FP8-dynamic<br>(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>80.30
</td>
<td>80.30
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>64.42
</td>
<td>64.25
</td>
<td>99.7%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>90.07
</td>
<td>90.67
</td>
<td>100.7%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>84.37
</td>
<td>84.19
</td>
<td>99.8%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>80.58
</td>
<td>79.87
</td>
<td>99.1%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>59.37
</td>
<td>59.54
</td>
<td>100.3%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>76.52</strong>
</td>
<td><strong>76.47</strong>
</td>
<td><strong>99.9%</strong>
</td>
</tr>
</table>
|
RedHatAI/phi-4-quantized.w8a8
|
RedHatAI
| 2025-09-18T17:25:34Z | 1,955 | 2 | null |
[
"safetensors",
"phi3",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"W8A8",
"INT8",
"compressed-tensors",
"text-generation",
"en",
"arxiv:2211.10438",
"arxiv:2210.17323",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"8-bit",
"region:us"
] |
text-generation
| 2025-03-03T22:49:53Z |
---
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
tags:
- phi
- phi3
- nlp
- math
- code
- chat
- conversational
- neuralmagic
- redhat
- llmcompressor
- quantized
- W8A8
- INT8
- compressed-tensors
license: mit
license_name: mit
name: RedHatAI/phi-4-quantized.w8a8
description: This model was obtained by quantizing activations and weights of phi-4 to INT8 data type.
readme: https://huggingface.co/RedHatAI/phi-4-quantized.w8a8/main/README.md
tasks:
- text-to-text
provider: Red Hat
license_link: https://choosealicense.com/licenses/mit/
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
phi-4-quantized.w8a8
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Phi3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** INT8
- **Weight quantization:** INT8
- **Intended Use Cases:** This model is designed to accelerate research on language models, for use as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:
1. Memory/compute constrained environments.
2. Latency bound scenarios.
3. Reasoning and logic.
- **Out-of-scope:** This model is not specifically designed or evaluated for all downstream purposes, thus:
1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.
3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
- **Release Date:** 03/03/2025
- **Version:** 1.0
- **Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [phi-4](https://huggingface.co/microsoft/phi-4) to INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
A combination of the [SmoothQuant](https://arxiv.org/abs/2211.10438) and [GPTQ](https://arxiv.org/abs/2210.17323) algorithms is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
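The smoothing step can be pictured with a small sketch: SmoothQuant divides outlier activation channels by a per-channel factor and folds that factor into the following layer's weights, leaving the layer's output unchanged while making the activations easier to quantize. The snippet below is purely illustrative (the factor formula follows the SmoothQuant paper, with the smoothing strength matching the recipe in the Creation section below); it is not the llm-compressor internals.

```python
import torch

x = torch.randn(4, 8) * torch.tensor([1, 1, 50, 1, 1, 1, 1, 1.0])  # channel 2 is an activation outlier
W = torch.randn(8, 16)
alpha = 0.7  # smoothing strength, matching the recipe below

# s_j = max|x_j|^alpha / max|W_j|^(1-alpha), one factor per input channel
s = x.abs().amax(0).pow(alpha) / W.abs().amax(1).pow(1 - alpha)

y_ref = x @ W
y_smooth = (x / s) @ (W * s.unsqueeze(1))  # outliers migrated from activations into weights
print(torch.allclose(y_ref, y_smooth, atol=1e-4))  # True: output preserved
```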
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/phi-4-quantized.w8a8"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Give me a short introduction to large language model."},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/phi-4-quantized.w8a8
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/phi-4-quantized-w8a8:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/phi-4-quantized-w8a8
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/phi-4-quantized-w8a8
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: phi-4-quantized.w8a8 # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: phi-4-quantized.w8a8 # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-phi-4-quantized-w8a8:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "phi-4-quantized.w8a8",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot
from datasets import load_dataset
# Load model
model_stub = "microsoft/phi-4"
model_name = model_stub.split("/")[-1]
num_samples = 1024
max_seq_len = 8192
tokenizer = AutoTokenizer.from_pretrained(model_stub)
model = AutoModelForCausalLM.from_pretrained(
model_stub,
device_map="auto",
torch_dtype="auto",
)
def preprocess_fn(example):
return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}
ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.map(preprocess_fn)
# Configure the quantization algorithm and scheme
recipe = [
SmoothQuantModifier(
smoothing_strength=0.7,
mappings=[
[["re:.*qkv_proj"], "re:.*input_layernorm"],
[["re:.*gate_up_proj"], "re:.*post_attention_layernorm"],
],
),
GPTQModifier(
ignore=["lm_head"],
sequential_targets=["Phi3DecoderLayer"],
dampening_frac=0.01,
targets="Linear",
scheme="W8A8",
),
]
# Apply quantization
oneshot(
model=model,
dataset=ds,
recipe=recipe,
max_seq_length=max_seq_len,
num_calibration_samples=num_samples,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w8a8"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```bash
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/phi-4-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.6,max_model_len=4096,enable_chunked_prefill=True,tensor_parallel_size=1 \
--tasks openllm \
--batch_size auto
```
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>phi-4</strong>
</td>
<td><strong>phi-4-quantized.w8a8<br>(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>80.30
</td>
<td>80.39
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>64.42
</td>
<td>64.33
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>90.07
</td>
<td>90.30
</td>
<td>100.3%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>84.37
</td>
<td>84.30
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>80.58
</td>
<td>79.95
</td>
<td>99.2%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>59.37
</td>
<td>58.82
</td>
<td>99.1%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>76.52</strong>
</td>
<td><strong>76.35</strong>
</td>
<td><strong>99.8%</strong>
</td>
</tr>
</table>
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-mrpc-epochs2
|
aamijar
| 2025-09-18T17:23:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T17:23:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
meandmichael8011/music-falcon-fp
|
meandmichael8011
| 2025-09-18T17:21:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T17:16:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brockmaner898/blockassist
|
brockmaner898
| 2025-09-18T17:09:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hibernating patterned elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T17:09:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hibernating patterned elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-mrpc-epochs0
|
aamijar
| 2025-09-18T17:08:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T17:08:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JH-C-k/clipL336_TTR
|
JH-C-k
| 2025-09-18T17:06:35Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"custom_clip_with_registers",
"feature-extraction",
"clip",
"image-feature-extraction",
"custom_code",
"arxiv:2309.16588",
"arxiv:2506.08010",
"license:mit",
"region:us"
] |
image-feature-extraction
| 2025-09-11T18:19:06Z |
---
library_name: transformers
license: mit
pipeline_tag: image-feature-extraction
tags:
- clip
---
# OpenCLIP ViT-L/14 with Test-Time Register
Register tokens in ViTs were introduced as learnable tokens in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) to mitigate artifacts in intermediate feature maps.
In [Vision Transformers Don't Need *Trained* Registers](https://arxiv.org/abs/2506.08010), we introduced a training-free method to create registers. These *test-time registers* serve a similar purpose
to the original trained registers, but can be added post-hoc to any ViT to mitigate artifacts, enhance model interpretability, and modestly improve downstream performance on tasks such as segmentation and depth estimation.
## Model description
The base model is [OpenCLIP-ViT-L-14-laion2B-s32B-b82K](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K). With test-time registers, the model's internal representations
are cleaner (see below). Using the environment from [here](https://github.com/nickjiang2378/test-time-registers/blob/main/environment.yml) and evaluating in bfloat16 yields an ImageNet-1k zero-shot accuracy of 76.4 for both the original model and the variant with test-time registers.
This model is intended to be used with this [repo](https://github.com/nickjiang2378/test-time-registers). Use transformers==4.45.1. The model can also be used for fine-tuning or other downstream tasks.
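For example, a minimal setup sketch (the extra packages are an assumption beyond the pinned transformers version):

```bash
pip install transformers==4.45.1 torch pillow
```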
<img src="https://huggingface.co/amildravid4292/clip-vitl14-test-time-registers/resolve/main/vitl14_attention.png" alt="drawing" width="600"/>
<img src="https://huggingface.co/amildravid4292/clip-vitl14-test-time-registers/resolve/main/vitl14_patchnorms.png" alt="drawing" width="600"/>
## Quick Start
```python
from transformers import AutoModel
from PIL import Image
import torch
# Load the complete model with all components
model = AutoModel.from_pretrained(
"amildravid4292/clip-vitl14-test-time-registers",
trust_remote_code=True
)
# Check what was loaded
print(f"Register tokens: {model.num_register_tokens}")
print(f"Neuron dict: {model.neuron_dict}")
print(f"Tokenizer available: {model.tokenizer is not None}")
print(f"Preprocessor available: {model.preprocessor is not None}")
print(f"Zero-shot classifier available: {model.zeroshot_classifier is not None}")
```
## Usage Examples
### Image Processing
```python
from PIL import Image
# Load and preprocess image
image = Image.open("your_image.jpg")
image_tensor = model.preprocess_image(image).unsqueeze(0)
image_features = model.encode_image(
image_tensor
)
# to run inference with the original model without test-time registers
image_features = model.encode_image(
image_tensor,
neuron_dict=None,
num_register_tokens=0
)
```
### Text Processing
```python
# Tokenize text
text = ["a photo of a cat", "a photo of a dog"]
text_tokens = model.tokenize(text)
# Encode text
text_features = model.encode_text(text_tokens)
```
### Complete Pipeline
```python
import torch
from tqdm import tqdm
from torchvision.datasets import ImageNet
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# load model
model = AutoModel.from_pretrained('amildravid4292/clip-vitl14-test-time-registers', trust_remote_code=True)
model = model.to(device).bfloat16()
classifier = model.zeroshot_classifier.to(device).bfloat16()
# load data
imagenet_dataset = ImageNet(root='/datasets/ilsvrc/current', split='val', transform=model.preprocessor)
ground_truth_labels = [imagenet_dataset.targets[i] for i in range(len(imagenet_dataset))]
loader = torch.utils.data.DataLoader(imagenet_dataset, batch_size=100, num_workers=4, pin_memory=True, shuffle=False)
# run zero-shot classification
with torch.no_grad():
correct = [0, 0]
for i, (images, target) in enumerate(tqdm(loader)):
images = images.to(device).bfloat16()
target = target.to(device)  # integer class labels; no float cast needed
# predict
image_features = model.encode_image(images)
image_features /= image_features.norm(dim=-1, keepdim=True)
logits = 100. * image_features @ classifier
pred = logits.argmax(dim=-1)
correct[0] += (pred == target).sum().item()
correct[1] += target.size(0)
print(correct[0]/correct[1])
```
## Advanced Usage
### Custom Neuron Modifications
```python
# Override the saved neuron configuration
custom_neuron_dict = {0: [10, 20, 30]} # Modify neurons 10,20,30 in layer 0
image_features = model.encode_image(
image_tensor,
num_register_tokens=4,
neuron_dict=custom_neuron_dict
)
```
### Different Register Token Counts
```python
# Use different number of register tokens
image_features = model.encode_image(
image_tensor,
num_register_tokens=8 # Override the default
)
```
## Model Details
- **Base Architecture**: ViT-L/14
- **Training Data**: LAION-2B subset
### BibTeX entry and citation info
```bibtex
@misc{jiang2025visiontransformersdontneed,
title={Vision Transformers Don't Need Trained Registers},
author={Nick Jiang and Amil Dravid and Alexei Efros and Yossi Gandelsman},
year={2025},
eprint={2506.08010},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.08010},
}
```
|
gustavokuklinski/aeon-135M
|
gustavokuklinski
| 2025-09-18T16:56:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:gustavokuklinski/aeon",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T16:34:17Z |
---
license: mit
datasets:
- gustavokuklinski/aeon
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-135M
library_name: transformers
---

# AEON 135M
AEON is portable, private, and capable of operating fully offline. It democratizes access to powerful, dynamic AI capabilities for a wider audience, regardless of their hardware.
The finetuned model was built to act like a "friend" for RAG over personal files and for working with insights.
- **Developed by:** Gustavo Kuklinski
#### 360M
- **Model 360M** [aeon-360m](https://huggingface.co/gustavokuklinski/aeon-360m)
- **GGUF 360M** [aeon-360m](https://huggingface.co/gustavokuklinski/aeon-360m-GGUF)
#### 135M (Dataset commit: 2b4665f)
- **Model 135M** [aeon-135m](https://huggingface.co/gustavokuklinski/aeon-135m)
- **GGUF 135M** [aeon-135m](https://huggingface.co/gustavokuklinski/aeon-135M-GGUF)
#### Docs
- **Page** [aeon.ai](https://gustavokuklinski.github.io/aeon.ai)
- **Github Project:** [AEON.ai](https://github.com/gustavokuklinski/aeon.ai/)
- **Github LLM Scripts:** [AEON.llm](https://github.com/gustavokuklinski/aeon.llm/)
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758214223
|
schooncestiaa
| 2025-09-18T16:51:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T16:51:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmirMohseni/grpo-qwen2.5-vl-3b-geometry
|
AmirMohseni
| 2025-09-18T16:45:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-18T11:49:06Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: grpo-qwen2.5-vl-3b-geometry
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for grpo-qwen2.5-vl-3b-geometry
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmirMohseni/grpo-qwen2.5-vl-3b-geometry", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rl-research-team/grpo-vlm-training/runs/44eemo2x)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Barth371/Qwen2.5-VL-72B-Instruct-bnb-4bit-2025-09-18_16-36
|
Barth371
| 2025-09-18T16:42:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T16:37:17Z |
---
base_model: unsloth/qwen2.5-vl-72b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Barth371
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-72b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VLA-Adapter/LIBERO-Spatial
|
VLA-Adapter
| 2025-09-18T16:39:11Z | 17 | 7 | null |
[
"safetensors",
"openvla",
"Vision-Language-Action",
"OpenHelix Team",
"robotics",
"custom_code",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:mit",
"region:us"
] |
robotics
| 2025-09-02T04:59:12Z |
---
license: mit
tags:
- Vision-Language-Action
- OpenHelix Team
base_model:
- Qwen/Qwen2.5-0.5B
language:
- en
pipeline_tag: robotics
---
<p align="center">
<img src="https://huggingface.co/datasets/VLA-Adapter/Figures/resolve/main/Logo.png" width="1000"/>
</p>
# Model Card for VLA-Adapter Libero-Spatial
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model trained on Libero-Spatial.
- 💬 Project page: [https://vla-adapter.github.io/](https://vla-adapter.github.io/)
- 🖥️ Dataset: [https://huggingface.co/datasets/openvla/modified_libero_rlds/tree/main](https://huggingface.co/datasets/openvla/modified_libero_rlds/tree/main)
- 🤗 HuggingFace: [https://huggingface.co/VLA-Adapter](https://huggingface.co/VLA-Adapter)
## Model Details
We have developed and released the VLA-Adapter family of VLA models, a series of fine-tuned generative
action models. The VLA-Adapter VLM follows the Prismatic-VLM architecture, using only a very small backbone
(Qwen2.5-0.5B) for the LLM. On common robotics benchmarks, it surpasses open-source VLA models with 8.5B,
7B, 4B, 3B, and 2B backbones.
**Input:** Models input image and text.
**Output:** Models generate action only.
**Model Architecture:** The VLA-Adapter consists of a VLM for receiving and processing image and text
information and a policy for generating actions. We systematically analyzed the benefits that the VLM
provides to different types of policy conditions and derived a unified framework. We then used our
Bridge Attention module to fuse the conditions generated by the VLM with the initial action
information in the policy, bridging the gap between VL and A to the greatest extent possible.
This results in a high-performance VLA model on a tiny-scale backbone.
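As a rough illustration of this kind of fusion, the sketch below lets action tokens cross-attend to VLM condition tokens. It is an illustrative sketch only, assuming a standard cross-attention formulation; it is not the authors' Bridge Attention implementation, and all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class BridgeAttentionSketch(nn.Module):
    """Illustrative only: action tokens query VLM-derived condition tokens."""
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, action_tokens: torch.Tensor, vlm_condition: torch.Tensor) -> torch.Tensor:
        # action tokens attend to the vision-language conditions produced by the VLM
        fused, _ = self.attn(query=action_tokens, key=vlm_condition, value=vlm_condition)
        return self.norm(action_tokens + fused)  # residual connection keeps the initial action info

fused = BridgeAttentionSketch()(torch.randn(2, 8, 512), torch.randn(2, 64, 512))
print(fused.shape)  # torch.Size([2, 8, 512])
```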
### Success Rate Comparison
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Methods</strong>
</td>
<td><strong>Scale</strong>
</td>
<td><strong>LIBERO-Spatial</strong>
</td>
<td><strong>LIBERO-Object</strong>
</td>
<td><strong>LIBERO-Goal</strong>
</td>
<td><strong>LIBERO-Long</strong>
</td>
<td><strong>Avg.</strong>
</td>
</tr>
<tr>
<td rowspan="11">Large-scale</td>
<td>FlowVLA (Zhong et al., 2025)</td>
<td>8.5B</td><td>93.2</td><td>95.0</td><td>91.6</td><td>72.6</td><td>88.1</td>
</tr>
<tr>
<td>UnifiedVLA (Wang et al., 2025)</td>
<td>8.5B</td><td>95.4</td><td> <i><u>98.8*</u></i></td><td> 93.6 </td><td>94.0 </td><td>95.5</td>
</tr>
<tr>
<td>OpenVLA (Kim et al., 2024)</td>
<td>7B</td><td>84.7</td><td>88.4</td><td>79.2</td><td>53.7</td><td>76.5</td>
</tr>
<tr>
<td>OpenVLA-OFT (Kim et al., 2025)</td>
<td>7B</td><td><i><u>97.6*</u></i></td><td>98.4</td><td><b>97.9</b></td><td><i><u>94.5*</u></i></td><td><i><u>97.1*</u></i></td>
</tr>
<tr>
<td>UniVLA (Bu et al., 2025)</td>
<td>7B</td><td>96.5</td><td> 96.8</td><td> 95.6 </td><td>92.0 </td><td>95.2</td>
</tr>
<tr>
<td>CoT-VLA (Zhao et al., 2025)</td>
<td>7B</td><td>87.5 </td><td>91.6 </td><td>87.6</td><td> 69.0</td><td> 81.1</td>
</tr>
<tr>
<td>WorldVLA (Cen et al., 2025)</td>
<td>7B</td><td>87.6</td><td> 96.2</td><td> 83.4</td><td> 60.0</td><td> 81.8</td>
</tr>
<tr>
<td>TraceVLA (Zheng et al., 2025)</td>
<td>7B</td><td>84.6</td><td> 85.2</td><td> 75.1</td><td> 54.1</td><td> 74.8</td>
</tr>
<tr>
<td>MolmoAct (Lee et al., 2025)</td>
<td>7B</td><td>87.0</td><td> 95.4 </td><td>87.6</td><td> 77.2 </td><td>86.6</td>
</tr>
<tr>
<td>ThinkAct (Huang et al., 2025)</td>
<td>7B</td><td>88.3 </td><td>91.4</td><td> 87.1</td><td> 70.9</td><td> 84.4</td>
</tr>
<tr>
<td>PD-VLA (Song et al., 2025b)</td>
<td>7B</td><td>95.5 </td><td>96.7</td><td> 94.9</td><td> 91.7</td><td> 94.7</td>
</tr>
<tr>
<td rowspan="8">Small-scale</td>
<td>4D-VLA (Zhang et al., 2025)</td>
<td>4B</td><td>88.9</td><td> 95.2</td><td> 90.9</td><td> 79.1 </td><td>88.6</td>
</tr>
<tr>
<td>SpatialVLA (Qu et al., 2025)</td>
<td>4B</td><td>88.2</td><td> 89.9</td><td> 78.6</td><td> 55.5 </td><td>78.1</td>
</tr>
<tr>
<td>π0 (Black et al., 2025)</td>
<td>3B</td><td>96.8</td><td> <i><u>98.8*</u></i> </td><td>95.8</td><td> 85.2</td><td> 94.2</td>
</tr>
<tr>
<td>π0-FAST (Pertsch et al., 2025)</td>
<td>3B</td><td>96.4</td><td> 96.8 </td><td>88.6</td><td> 60.2</td><td> 85.5</td>
</tr>
<tr>
<td>NORA (Hung et al., 2025)</td>
<td>3B</td><td>92.2 </td><td>95.4 </td><td>89.4</td><td> 74.6 </td><td>87.9</td>
</tr>
<tr>
<td>SmolVLA (Shukor et al., 2025)</td>
<td>2.2B</td><td>93.0</td><td> 94.0 </td><td>91.0</td><td> 77.0 </td><td>88.8</td>
</tr>
<tr>
<td>GR00T N1 (NVIDIA et al., 2025)</td>
<td>2B</td><td>94.4</td><td> 97.6 </td><td>93.0 </td><td>90.6</td><td> 93.9</td>
</tr>
<tr>
<td>GraspVLA (Deng et al., 2025)</td>
<td>1.8B</td><td>-</td><td> 94.1 </td><td>91.2 </td><td>82.0</td><td> 89.1</td>
</tr>
<tr>
<td rowspan="4">Tiny-scale</td>
<td>Seer (Tian et al., 2025)</td>
<td>0.57B</td><td>-</td><td> - </td><td>- </td><td>78.7</td><td> 78.7</td>
</tr>
<tr>
<td>VLA-OS (Gao et al., 2025)</td>
<td>0.5B</td><td>87.0 </td><td>96.5</td><td> 92.7 </td><td>66.0</td><td> 85.6</td>
</tr>
<tr>
<td>Diffusion Policy (Chi et al., 2023)</td>
<td>-</td><td>78.3</td><td> 92.5</td><td> 68.3 </td><td>50.5 </td><td>72.4</td>
</tr>
<tr>
<td><b>VLA-Adapter (Ours)</b></td>
<td><b>0.5B</b></td><td><b>97.8</b></td><td> <b>99.2</b> </td><td><i><u>97.2*</u></i></td><td> <b>95.0</b></td><td><b>97.3</b></td>
</tr>
</table>
### Effectiveness Comparison
<table>
<tr>
<td></td>
<td><strong>OpenVLA-OFT</strong></td>
<td><strong>VLA-Adapter</strong></td>
<td><strong>Ratio</strong></td>
</tr>
<tr>
<td>Backbone</td>
<td>7B</td>
<td><strong>0.5B</strong></td>
<td>1/14×</td>
</tr>
<tr>
<td>Fine-Tuning Cost</td>
<td>304 GPU·h</td>
<td><strong>8 GPU·h</strong></td>
<td>1/38×</td>
</tr>
<tr>
<td>Training VRAM (batch size 8)</td>
<td>62 GB</td>
<td><strong>24.7 GB</strong></td>
<td>0.4×</td>
</tr>
<tr>
<td>Throughput (chunk size 8)</td>
<td>71.4 Hz</td>
<td><strong>219.2 Hz</strong></td>
<td>3×</td>
</tr>
<tr>
<td>Performance</td>
<td>97.1%</td>
<td><strong>97.3%</strong></td>
<td>Maintained</td>
</tr>
</table>
## Citation instructions
```BibTeX
@article{Wang2025VLAAdapter,
author = {Wang, Yihao and Ding, Pengxiang and Li, Lingxiao and Cui, Can and Ge, Zirui and Tong, Xinyang and Song, Wenxuan and Zhao, Han and Zhao, Wei and Hou, Pengxu and Huang, Siteng and Tang, Yifan and Wang, Wenhui and Zhang, Ru and Liu, Jianyi and Wang, Donglin},
title = {VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model},
journal = {ArXiv},
year = {2025}
}
```
|
RedHatAI/Llama-3.3-70B-Instruct-speculator.eagle3
|
RedHatAI
| 2025-09-18T16:36:16Z | 0 | 0 | null |
[
"safetensors",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"neuralmagic",
"redhat",
"speculators",
"eagle3",
"text-generation",
"custom_code",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2503.01840",
"license:llama3.3",
"region:us"
] |
text-generation
| 2025-09-18T16:34:43Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- neuralmagic
- redhat
- speculators
- eagle3
---
# Llama-3.3-70B-Instruct-speculator.eagle3
## Model Overview
- **Verifier:** meta-llama/Llama-3.3-70B-Instruct
- **Speculative Decoding Algorithm:** EAGLE-3
- **Model Architecture:** Eagle3Speculator
- **Release Date:** 09/15/2025
- **Version:** 1.0
- **Model Developers:** RedHat
This is a speculator model designed for use with [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), based on the [EAGLE-3](https://arxiv.org/abs/2503.01840) speculative decoding algorithm.
It was trained using the [speculators](https://github.com/vllm-project/speculators) library on a combination of the [Aeala/ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered) and the `train_sft` split of [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) datasets.
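As a hedged deployment sketch (speculators-format checkpoints are designed to load directly in recent vLLM builds; consult the speculators and vLLM documentation for exact version requirements):
```bash
# Assumed invocation: vLLM reads the verifier and EAGLE-3 settings
# from the speculator checkpoint's config.
vllm serve RedHatAI/Llama-3.3-70B-Instruct-speculator.eagle3
```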
## Evaluations
Subset of GSM8k (math reasoning):
* acceptance_rate = [0.801, 0.637, 0.464]
* conditional_acceptance_rate = [0.801, 0.795, 0.729]
Subset of MTBench:
* acceptance_rate = [0.733, 0.537, 0.384]
* conditional_acceptance_rate = [0.733, 0.733, 0.715]
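The two lists appear to be per-position statistics; as a hedged reading (not stated explicitly above), each conditional rate is approximately the cumulative acceptance rate divided by the preceding position's rate:
```python
# Sanity check on the reported GSM8k numbers (matches up to rounding).
acceptance = [0.801, 0.637, 0.464]
conditional = [acceptance[0]] + [a / b for a, b in zip(acceptance[1:], acceptance)]
print([round(c, 3) for c in conditional])  # [0.801, 0.795, 0.728] ~ reported [0.801, 0.795, 0.729]
```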
|
MpMan123/Hallucination_Detection-Legendary-and-Mighty-Capybara
|
MpMan123
| 2025-09-18T16:34:23Z | 55 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T15:18:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Hallucination_Detection-Legendary-and-Mighty-Capybara
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hallucination_Detection-Legendary-and-Mighty-Capybara
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9388
- Accuracy: 0.5271
- F1 Macro: 0.5254
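A minimal inference sketch, assuming the standard transformers text-classification pipeline (label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Load the fine-tuned hallucination-detection classifier from the Hub.
clf = pipeline(
    "text-classification",
    model="MpMan123/Hallucination_Detection-Legendary-and-Mighty-Capybara",
)
print(clf("The Eiffel Tower is located in Berlin."))
```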
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.9643 | 1.0 | 350 | 0.9565 | 0.5129 | 0.5141 |
| 0.8786 | 2.0 | 700 | 0.9388 | 0.5271 | 0.5254 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
jakebentley2001/arc-mistral-8b-4-bit
|
jakebentley2001
| 2025-09-18T16:30:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-18T16:29:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
progfrog/Flux.1-dev-Controlnet-Upscaler-fp8_e4m3fn
|
progfrog
| 2025-09-18T16:27:57Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-18T16:25:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758212374
|
schooncestiaa
| 2025-09-18T16:20:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T16:20:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trientp/mt5-small-qr-decider
|
trientp
| 2025-09-18T16:17:32Z | 0 | 0 | null |
[
"safetensors",
"mt5",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T16:06:06Z |
---
license: apache-2.0
---
|
Jia-py/G3-checkpoint
|
Jia-py
| 2025-09-18T16:16:04Z | 7 | 0 | null |
[
"dataset:Jia-py/MP16-Pro",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"license:apache-2.0",
"region:us"
] | null | 2025-07-29T15:03:38Z |
---
license: apache-2.0
datasets:
- Jia-py/MP16-Pro
base_model:
- openai/clip-vit-large-patch14
---
This is the checkpoint repo for G3: An Effective and Adaptive Framework for Worldwide Geolocalization Using Large Multi-Modality Models.
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-boolq-epochs3
|
aamijar
| 2025-09-18T16:13:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T16:13:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rayonlabs/tournament-tourn_1814af15f6826030_20250917-55768bf7-04a4-445d-b605-70fc071f334d-5H9bQMrF
|
rayonlabs
| 2025-09-18T16:05:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"region:us"
] | null | 2025-09-18T16:05:30Z |
---
base_model: unsloth/llama-2-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-40
|
MattBou00
| 2025-09-18T16:03:27Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T00:22:50Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-40")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-20
|
MattBou00
| 2025-09-18T16:00:51Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T00:18:25Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-18_15-57-29/checkpoints/checkpoint-epoch-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
godnpeter/debug_final_refactor
|
godnpeter
| 2025-09-18T15:53:48Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:aopolin-lv/libero_spatial_no_noops_lerobot_v21",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-18T15:53:43Z |
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_spatial_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758210525
|
schooncestiaa
| 2025-09-18T15:50:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T15:49:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LBK95/Llama-3.2-1B-hf-DPO_V3-CTRL-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.1_V5
|
LBK95
| 2025-09-18T15:44:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-09-18T14:27:59Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-hf-DPO_V3-CTRL-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.1_V5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-hf-DPO_V3-CTRL-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.1_V5
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.45.2
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.20.3
|
asulova/hamlet-merged
|
asulova
| 2025-09-18T15:42:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T15:40:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GMagoLi/test-upload
|
GMagoLi
| 2025-09-18T15:38:50Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"causal-lm",
"qwen",
"verl",
"sft",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T15:38:22Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- pytorch
- transformers
- causal-lm
- qwen
- verl
- sft
pipeline_tag: text-generation
library_name: transformers
---
# GMagoLi/test-upload
This is a Qwen-architecture language model fine-tuned with SFT using the VERL framework.
## Model Description
- **Model type**: Causal language model
- **Architecture**: Qwen-32B
- **Training framework**: VERL FSDP SFT Trainer
- **Languages**: Chinese, English
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"GMagoLi/test-upload",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("GMagoLi/test-upload", trust_remote_code=True)
# Inference example
prompt = "你好,请介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
**inputs,
max_length=512,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Information
- **Training steps**: 2800 steps
- **Batch size**: 128
- **LR schedule**: Cosine with warmup
- **Mixed precision**: bfloat16
- **Dataset**: RepoCoder training dataset v2.3
## Model Performance
The model performs well on code generation and dialogue tasks, and is particularly well suited for:
- Code generation and completion
- Technical Q&A
- Multi-turn dialogue
## Notes
- The model is large (32B parameters); GPU inference is recommended
- Sufficient VRAM is required (24 GB+ recommended)
- Quantized inference is supported to reduce VRAM requirements (see the sketch below)
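A minimal sketch of 4-bit quantized loading with bitsandbytes (illustrative only; exact settings depend on your hardware):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit quantized loading to reduce VRAM usage (requires the bitsandbytes package).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "GMagoLi/test-upload",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("GMagoLi/test-upload", trust_remote_code=True)
```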
## Citation
If you use this model, please consider citing:
```bibtex
@misc{qwen-repocoder-sft,
title={Qwen RepoCoder SFT Model},
author={Your Name},
year={2025},
howpublished={\url{https://huggingface.co/GMagoLi/test-upload}}
}
```
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-boolq-epochs1
|
aamijar
| 2025-09-18T15:33:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T15:33:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_1801
|
luckeciano
| 2025-09-18T15:33:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T12:11:09Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_1801
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_1801
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_1801", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/b6lr85e4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tarzan19990815/Affine-5EKvhvFUkHMFXFgzBLBhHkoVRLKFfNsDwQmNqgNT6UcqxLdH
|
tarzan19990815
| 2025-09-18T15:32:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T15:27:48Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/Llama-3.1-8B-Instruct-lora-r8-sst2-epochs1
|
aamijar
| 2025-09-18T15:13:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T15:13:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD
|
prithivMLmods
| 2025-09-18T15:13:22Z | 86 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"text-generation-inference",
"chemistry",
"code",
"math",
"R1",
"MoD",
"conversational",
"en",
"dataset:prithivMLmods/Gargantua-R1-Wee",
"dataset:prithivMLmods/Gargantua-R1-Compact",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T14:32:01Z |
---
license: apache-2.0
datasets:
- prithivMLmods/Gargantua-R1-Wee
- prithivMLmods/Gargantua-R1-Compact
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- trl
- text-generation-inference
- chemistry
- code
- math
- R1
- MoD
---

# **Gacrux-R1-Qwen3-1.7B-MoD**
> Gacrux-R1-Qwen3-1.7B-MoD is a high-efficiency, multi-domain model fine-tuned on **Qwen3-1.7B** with traces of **Mixture of Domains (MoD)**. It leverages the **prithivMLmods/Gargantua-R1-Wee** dataset, designed for **rigorous mathematical problem-solving** and enriched with **multi-domain coverage** across mathematics, coding, and science.
> This model blends symbolic precision, scientific logic, and structured output fluency—making it an ideal tool for developers, educators, and researchers seeking advanced reasoning under constrained compute.
> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD-GGUF](https://huggingface.co/prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD-GGUF)
---
## **Key Features**
1. **Unified Reasoning Across Math, Code & Science**
Fine-tuned on the **Gargantua-R1-Wee** dataset covering rigorous mathematics, coding, and scientific logic, enabling robust symbolic and multi-domain reasoning.
2. **Advanced Code Reasoning & Generation**
Supports multi-language coding with explanations, optimization hints, and error detection—ideal for full-stack prototyping, algorithm synthesis, and debugging workflows.
3. **Scientific & Mathematical Problem Solving**
Performs analytical reasoning in physics, biology, chemistry, and mathematics—explaining concepts, solving equations, and handling symbolic derivations step-by-step.
4. **Hybrid Symbolic-AI Thinking**
Combines structured logic, chain-of-thought reasoning, and open-ended inference, delivering robust performance on STEM tasks and complex prompt decomposition.
5. **Structured Output Mastery**
Seamlessly generates output in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, suited for research reports, technical documentation, and data formats.
6. **Optimized Lightweight Footprint for Versatile Deployment**
Balances performance and efficiency, making it deployable on **mid-range GPUs**, **offline clusters**, and advanced **edge AI systems**.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the difference between Newtonian mechanics and quantum mechanics with examples."
messages = [
{"role": "system", "content": "You are a scientific tutor skilled in code, math, and reasoning."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Scientific tutoring, computational logic, and mathematical education
* Advanced coding assistant for algorithm design, code reviews, and documentation
* Structured technical data generation across formats and fields
* STEM-focused chatbot or API for research and education tools
* Mid-resource deployment requiring high symbolic fidelity
## **Limitations**
* Not tuned for general-purpose or long-form creative writing
* Context limitations may hinder multi-document or full codebase analysis
* Specialized in technical and symbolic tasks—general chat may underperform
* Prioritizes structured reasoning over emotional or casual tone generation
|
Acarotene/Game_Play_Time_Prediction
|
Acarotene
| 2025-09-18T15:12:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T15:12:06Z |
---
license: apache-2.0
---
|
david4096/EDAM-all-MiniLM-L6-v2_concat_e1024-j
|
david4096
| 2025-09-18T15:06:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-concat",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T15:06:03Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_concat_e1024
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 120.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/EDAM-all-MiniLM-L6-v2_concat_e1024-j')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
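For intuition, here is a minimal PyTorch sketch of concat-style fusion. It is illustrative only and mirrors the dimensions listed above (384-d text, 64-d ontology output); the actual on2vec fusion layer may differ in details such as whether a projection follows the concatenation.
```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Illustrative concat fusion: join text and ontology embeddings, then project."""
    def __init__(self, text_dim=384, onto_dim=64, out_dim=64):
        super().__init__()
        # Project the concatenated vector down to the final embedding size
        self.proj = nn.Linear(text_dim + onto_dim, out_dim)

    def forward(self, text_emb, onto_emb):
        fused = torch.cat([text_emb, onto_emb], dim=-1)  # [batch, text_dim + onto_dim]
        return self.proj(fused)                          # [batch, out_dim]

# Shape check with dummy stand-ins for the two embedding sources
text_emb = torch.randn(2, 384)   # base sentence-transformer output
onto_emb = torch.randn(2, 64)    # GNN ontology embedding
print(ConcatFusion()(text_emb, onto_emb).shape)  # torch.Size([2, 64])
```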
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/EDAM-all-MiniLM-L6-v2_gated_e2048-j
|
david4096
| 2025-09-18T15:05:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-gated",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T15:05:32Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-gated
- gnn-gcn
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_gated_e2048
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 120.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/EDAM-all-MiniLM-L6-v2_gated_e2048-j')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
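For intuition, here is a rough PyTorch sketch of a gated fusion layer of this kind. It is illustrative only (dimensions follow the card above) and is not the actual on2vec implementation.
```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gated fusion: a learned gate weighs text vs. ontology signal."""
    def __init__(self, text_dim=384, onto_dim=64, out_dim=64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.onto_proj = nn.Linear(onto_dim, out_dim)
        # The gate outputs values in (0, 1) that decide how much to trust each source
        self.gate = nn.Sequential(nn.Linear(text_dim + onto_dim, out_dim), nn.Sigmoid())

    def forward(self, text_emb, onto_emb):
        g = self.gate(torch.cat([text_emb, onto_emb], dim=-1))
        return g * self.text_proj(text_emb) + (1 - g) * self.onto_proj(onto_emb)

# Shape check with dummy inputs
print(GatedFusion()(torch.randn(2, 384), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```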
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF
|
surya-ravindra
| 2025-09-18T15:00:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-18T15:00:37Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-3B-Instruct
---
# surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048
```
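If you prefer Python, a short sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings is shown below (assuming the package and `huggingface_hub` are installed; the prompt and settings are placeholders):
```python
from llama_cpp import Llama

# Download the quantized file from this repo and load it
llm = Llama.from_pretrained(
    repo_id="surya-ravindra/Llama-3.2-3B-Instruct-Q8_0-GGUF",
    filename="llama-3.2-3b-instruct-q8_0.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```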
|
Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial1
|
Kei-Sanada
| 2025-09-18T14:59:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-18T13:18:48Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
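Until the authors add their own example, the following is a minimal sketch for loading this repository as a PEFT adapter on top of the base model listed in the metadata (`Qwen/Qwen2.5-3B-Instruct`). It assumes the repo hosts a LoRA-style adapter; the prompt and generation settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-3B-Instruct"
adapter_id = "Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial1"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```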
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
david4096/afpo-all-MiniLM-L6-v2_concat_e1024-i
|
david4096
| 2025-09-18T14:58:35Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-concat",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:58:32Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_concat_e1024
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 92.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/afpo-all-MiniLM-L6-v2_concat_e1024-i')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/afpo-all-MiniLM-L6-v2_concat_e512-i
|
david4096
| 2025-09-18T14:58:33Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-concat",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:58:31Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_concat_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 92.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/afpo-all-MiniLM-L6-v2_concat_e512-i')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/afpo-all-MiniLM-L6-v2_gated_e256-i
|
david4096
| 2025-09-18T14:57:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:57:56Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_gated_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 92.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/afpo-all-MiniLM-L6-v2_gated_e256-i')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
yichengup/flux.1-fill-dev-OneReward
|
yichengup
| 2025-09-18T14:55:51Z | 50 | 37 | null |
[
"image-to-image",
"en",
"arxiv:2508.21066",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Fill-dev",
"license:cc-by-nc-4.0",
"region:us"
] |
image-to-image
| 2025-09-10T16:23:23Z |
---
license: cc-by-nc-4.0
base_model:
- black-forest-labs/FLUX.1-Fill-dev
- bytedance-research/OneReward
language:
- en
pipeline_tag: image-to-image
---
# OneReward - ComfyUI
[](https://arxiv.org/abs/2508.21066) [](https://github.com/bytedance/OneReward) [](https://one-reward.github.io/)
<br>
This repo contains the checkpoint from [OneReward](https://huggingface.co/bytedance-research/OneReward) processed into a single model suitable for ComfyUI use.
**OneReward** is a novel RLHF methodology for the visual domain that employs Qwen2.5-VL as a generative reward model to enhance multitask reinforcement learning, significantly improving the policy model’s generation ability across multiple subtasks. Building on OneReward, **FLUX.1-Fill-dev-OneReward**, based on FLUX Fill [dev], outperforms the closed-source FLUX Fill [Pro] in inpainting and outpainting tasks, serving as a powerful new baseline for future research in unified image editing.
For more details and examples see original model repo: [**OneReward**](https://huggingface.co/bytedance-research/OneReward)
|
aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-winogrande-epochs4
|
aamijar
| 2025-09-18T14:53:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T14:53:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
david4096/EDAM-all-MiniLM-L6-v2_concat_e512-i
|
david4096
| 2025-09-18T14:52:22Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-concat",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:52:18Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_concat_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 120.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/EDAM-all-MiniLM-L6-v2_concat_e512-i')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
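The fusion code itself isn't shown on this card; as an illustrative sketch (not on2vec's actual implementation), concat fusion with the dimensions from the Embedding Flow above reduces to a small projection plus `torch.cat`:
```python
import torch
import torch.nn as nn

# Illustrative sketch following the Embedding Flow above:
# text 384 -> 128 hidden -> 64 output, then concatenated with the 64-d GNN output.
text_proj = nn.Sequential(nn.Linear(384, 128), nn.ReLU(), nn.Linear(128, 64))

text_emb = torch.randn(2, 384)  # base sentence-transformer embeddings
onto_emb = torch.randn(2, 64)   # GNN-derived ontology embeddings

fused = torch.cat([text_proj(text_emb), onto_emb], dim=-1)
print(fused.shape)  # torch.Size([2, 128])
```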
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships (a minimal sketch follows this list)
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
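To make step 2 concrete, the sketch below trains a two-layer GCN over a toy concept graph with PyTorch Geometric; the edge list and the link-prediction-style objective are assumptions for illustration, not on2vec's actual training setup.
```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

# Toy stand-in for the parsed OWL graph: 4 concepts, 3 directed subclass edges.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]], dtype=torch.long)
x = torch.eye(4)  # one-hot node features (cf. the structural embedding dimension above)

class ConceptGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=128, out_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = ConceptGCN(in_dim=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    z = model(x, edge_index)
    src, dst = edge_index
    # Assumed objective: pull connected concepts together (link-prediction style).
    loss = -torch.sigmoid((z[src] * z[dst]).sum(dim=-1)).log().mean()
    loss.backward()
    optimizer.step()
```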
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
mradermacher/Hala-1.2B-GGUF
|
mradermacher
| 2025-09-18T14:51:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"ar",
"dataset:hammh0a/Hala-4.6M-SFT",
"base_model:hammh0a/Hala-1.2B",
"base_model:quantized:hammh0a/Hala-1.2B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T11:42:17Z |
---
base_model: hammh0a/Hala-1.2B
datasets:
- hammh0a/Hala-4.6M-SFT
language:
- ar
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/hammh0a/Hala-1.2B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hala-1.2B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
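If you prefer to script it, one possible route is llama-cpp-python; the quant file name, context size, and prompt below are only examples:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Any quant from the table below works; Q4_K_M is a common middle ground.
path = hf_hub_download("mradermacher/Hala-1.2B-GGUF", "Hala-1.2B.Q4_K_M.gguf")

llm = Llama(model_path=path, n_ctx=2048)
out = llm("Write a short greeting in Arabic.", max_tokens=64)
print(out["choices"][0]["text"])
```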
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-GGUF/resolve/main/Hala-1.2B.f16.gguf) | f16 | 2.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
david4096/aism-all-MiniLM-L6-v2_attention_e256-h
|
david4096
| 2025-09-18T14:43:17Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:43:09Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- medium-ontology
---
# aism_all-MiniLM-L6-v2_attention_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: aism.owl
- **Domain**: general
- **Ontology Concepts**: 8,540
- **Concept Alignment**: 8,540/8,540 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 8540
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 28.8 MB
- **Model Size**: 171.5 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 8540 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/aism-all-MiniLM-L6-v2_attention_e256-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
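As a rough illustration (not on2vec's exact code), an attention fusion layer can score each embedding view and take a softmax-weighted combination:
```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative sketch: learn per-example weights over two 64-d views."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each view before the softmax

    def forward(self, text_emb, onto_emb):
        views = torch.stack([text_emb, onto_emb], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(views), dim=1)  # (batch, 2, 1)
        return (weights * views).sum(dim=1)                # (batch, dim)

fusion = AttentionFusion()
fused = fusion(torch.randn(2, 64), torch.randn(2, 64))  # both views already 64-d
print(fused.shape)  # torch.Size([2, 64])
```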
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/agro-all-MiniLM-L6-v2_concat_e128-h
|
david4096
| 2025-09-18T14:41:58Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-concat",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:41:54Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_concat_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 126.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/agro-all-MiniLM-L6-v2_concat_e128-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/agro-all-MiniLM-L6-v2_concat_e256-h
|
david4096
| 2025-09-18T14:41:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-concat",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:41:42Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_concat_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 126.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/agro-all-MiniLM-L6-v2_concat_e256-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/agro-all-MiniLM-L6-v2_attention_e512-h
|
david4096
| 2025-09-18T14:41:20Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:41:16Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_attention_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 130.2 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/agro-all-MiniLM-L6-v2_attention_e512-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/afpo-all-MiniLM-L6-v2_gated_e256-h
|
david4096
| 2025-09-18T14:40:21Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:40:18Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_gated_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 92.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/afpo-all-MiniLM-L6-v2_gated_e256-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
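One common realization of such a gate, shown here only as a sketch under the assumption of two 64-d views, is a sigmoid gate that interpolates between them:
```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative sketch: a sigmoid gate decides, per dimension, which view to trust."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_emb, onto_emb):
        g = torch.sigmoid(self.gate(torch.cat([text_emb, onto_emb], dim=-1)))
        return g * text_emb + (1 - g) * onto_emb  # g -> 1 favors text, g -> 0 favors ontology

fusion = GatedFusion()
fused = fusion(torch.randn(2, 64), torch.randn(2, 64))
print(fused.shape)  # torch.Size([2, 64])
```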
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
sykim3714/llama3-8b-sft-qlora-re
|
sykim3714
| 2025-09-18T14:40:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T14:15:12Z |
---
base_model: meta-llama/Meta-Llama-3-8B
library_name: transformers
model_name: llama3-8b-sft-qlora-re
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama3-8b-sft-qlora-re
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sykim3714/llama3-8b-sft-qlora-re", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
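The exact training script isn't included here; a minimal TRL SFT setup in the spirit of this run might look like the sketch below, where the dataset and LoRA settings are placeholders rather than the values actually used (a full QLoRA run would additionally load the base model in 4-bit):
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the card does not state what data was used.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama3-8b-sft-qlora-re"),
    # Placeholder adapter config; combine with 4-bit loading for true QLoRA.
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```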
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
david4096/ado-all-MiniLM-L6-v2_concat_e128-h
|
david4096
| 2025-09-18T14:39:24Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-concat",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:39:21Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---
# ado_all-MiniLM-L6-v2_concat_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: ado.owl
- **Domain**: general
- **Ontology Concepts**: 1,963
- **Concept Alignment**: 1,963/1,963 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1963
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 5.2 MB
- **Model Size**: 106.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1963 concepts → GNN → 64 output
- Fusion: concat → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/ado-all-MiniLM-L6-v2_concat_e128-h')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: concat
Simple concatenation of text and ontology embeddings
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
OxoGhost/ppo-SnowballTaret
|
OxoGhost
| 2025-09-18T14:38:36Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-09-18T14:38:31Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: OxoGhost/ppo-SnowballTaret
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ucfc2024/juliethmatta397
|
ucfc2024
| 2025-09-18T14:37:21Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-18T13:57:26Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
david4096/dpo-all-MiniLM-L6-v2_gated_e512
|
david4096
| 2025-09-18T14:34:48Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:34:45Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- medium-ontology
---
# dpo_all-MiniLM-L6-v2_gated_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: dpo.owl
- **Domain**: general
- **Ontology Concepts**: 1,381
- **Concept Alignment**: 1,381/1,381 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1381
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.5 MB
- **Model Size**: 100.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1381 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/dpo-all-MiniLM-L6-v2_gated_e512')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/dpo-all-MiniLM-L6-v2_gated_e128
|
david4096
| 2025-09-18T14:34:27Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:34:22Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- medium-ontology
---
# dpo_all-MiniLM-L6-v2_gated_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: dpo.owl
- **Domain**: general
- **Ontology Concepts**: 1,381
- **Concept Alignment**: 1,381/1,381 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1381
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.5 MB
- **Model Size**: 100.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1381 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/dpo-all-MiniLM-L6-v2_gated_e128')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ncgc0incendiary/truthDPO-statichh-pythia-1.4b-dpo-bf16
|
ncgc0incendiary
| 2025-09-18T14:28:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:ncgc/statichh-pythia-1.4b-sft-bf16",
"base_model:finetune:ncgc/statichh-pythia-1.4b-sft-bf16",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T14:17:09Z |
---
base_model: ncgc/statichh-pythia-1.4b-sft-bf16
library_name: transformers
model_name: truthDPO-statichh-pythia-1.4b-dpo-bf16
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for truthDPO-statichh-pythia-1.4b-dpo-bf16
This model is a fine-tuned version of [ncgc/statichh-pythia-1.4b-sft-bf16](https://huggingface.co/ncgc/statichh-pythia-1.4b-sft-bf16).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ncgc0incendiary/truthDPO-statichh-pythia-1.4b-dpo-bf16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/2this0username0isnt2allowed-indian-institute-of-science/huggingface/runs/82poacch)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
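For reference, a TRL DPO run on top of the SFT checkpoint follows the shape sketched below; the preference dataset and `beta` are placeholders, not this run's actual settings:
```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Placeholder preference data: DPO expects (prompt, chosen, rejected) triples.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model="ncgc/statichh-pythia-1.4b-sft-bf16",  # the SFT base named on this card
    train_dataset=dataset,
    args=DPOConfig(output_dir="truthDPO-statichh-pythia-1.4b-dpo-bf16", beta=0.1),
)
trainer.train()
```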
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758205611
|
schooncestiaa
| 2025-09-18T14:28:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T14:27:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
david4096/disdriv-all-MiniLM-L6-v2_gated_e128
|
david4096
| 2025-09-18T14:26:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:26:51Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# disdriv_all-MiniLM-L6-v2_gated_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: disdriv.owl
- **Domain**: general
- **Ontology Concepts**: 18
- **Concept Alignment**: 18/18 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 18
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.0 MB
- **Model Size**: 87.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 18 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model from the Hugging Face Hub
model = SentenceTransformer('david4096/disdriv-all-MiniLM-L6-v2_gated_e128')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two sentence embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
Aryantomar/Qwen2.5-0.5B_reasoning
|
Aryantomar
| 2025-09-18T14:26:13Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-06T07:47:53Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Aryantomar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
david4096/cteno-all-MiniLM-L6-v2_gated_e512
|
david4096
| 2025-09-18T14:24:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:24:30Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# cteno_all-MiniLM-L6-v2_gated_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cteno.owl
- **Domain**: general
- **Ontology Concepts**: 172
- **Concept Alignment**: 172/172 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 172
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.3 MB
- **Model Size**: 89.3 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 172 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cteno-all-MiniLM-L6-v2_gated_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
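As a rough illustration of the idea (a hedged sketch, not the actual on2vec implementation; the class name and layer layout are assumptions, with the 64-dimensional size taken from the details above), a gated fusion layer over the two embedding streams can look like this:
```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative sketch: a learned per-dimension gate mixes the text-derived
    and ontology-derived embeddings; the actual on2vec layer may differ."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text_emb: torch.Tensor, onto_emb: torch.Tensor) -> torch.Tensor:
        gate = self.gate(torch.cat([text_emb, onto_emb], dim=-1))  # values in (0, 1)
        return gate * text_emb + (1 - gate) * onto_emb             # gate chooses which source to trust

# Toy usage with batch size 2 and the 64-dimensional outputs described above
fused = GatedFusion(dim=64)(torch.randn(2, 64), torch.randn(2, 64))
print(fused.shape)  # torch.Size([2, 64])
```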
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cro-all-MiniLM-L6-v2_gated_e256
|
david4096
| 2025-09-18T14:23:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:23:28Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# cro_all-MiniLM-L6-v2_gated_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cro.owl
- **Domain**: general
- **Ontology Concepts**: 105
- **Concept Alignment**: 105/105 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 105
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 88.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 105 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cro-all-MiniLM-L6-v2_gated_e256')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cro-all-MiniLM-L6-v2_attention_e512
|
david4096
| 2025-09-18T14:23:16Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:23:12Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- small-ontology
---
# cro_all-MiniLM-L6-v2_attention_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cro.owl
- **Domain**: general
- **Ontology Concepts**: 105
- **Concept Alignment**: 105/105 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 105
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 92.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 105 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cro-all-MiniLM-L6-v2_attention_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
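As a rough illustration of the idea (a hedged sketch, not the actual on2vec implementation; the class name and layer layout are assumptions, with the 64-dimensional size taken from the details above), attention over the two embedding streams can be sketched as:
```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative sketch: softmax attention weights over the two embedding
    sources; the actual on2vec layer may differ."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, text_emb: torch.Tensor, onto_emb: torch.Tensor) -> torch.Tensor:
        stacked = torch.stack([text_emb, onto_emb], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, 2, 1)
        return (weights * stacked).sum(dim=1)                # (batch, dim)

# Toy usage with batch size 2 and 64-dimensional inputs
fused = AttentionFusion(dim=64)(torch.randn(2, 64), torch.randn(2, 64))
print(fused.shape)  # torch.Size([2, 64])
```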
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cro-all-MiniLM-L6-v2_attention_e256
|
david4096
| 2025-09-18T14:23:06Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:23:03Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- small-ontology
---
# cro_all-MiniLM-L6-v2_attention_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cro.owl
- **Domain**: general
- **Ontology Concepts**: 105
- **Concept Alignment**: 105/105 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 105
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 92.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 105 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cro-all-MiniLM-L6-v2_attention_e256')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cro-all-MiniLM-L6-v2_attention_e128
|
david4096
| 2025-09-18T14:22:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:22:56Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- small-ontology
---
# cro_all-MiniLM-L6-v2_attention_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cro.owl
- **Domain**: general
- **Ontology Concepts**: 105
- **Concept Alignment**: 105/105 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 105
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 92.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 105 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cro-all-MiniLM-L6-v2_attention_e128')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/clao-all-MiniLM-L6-v2_gated_e256
|
david4096
| 2025-09-18T14:21:33Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:21:29Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- medium-ontology
---
# clao_all-MiniLM-L6-v2_gated_e256
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: clao.owl
- **Domain**: general
- **Ontology Concepts**: 1,516
- **Concept Alignment**: 1,516/1,516 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1516
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.7 MB
- **Model Size**: 102.0 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1516 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/clao-all-MiniLM-L6-v2_gated_e256')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
grace-v/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
|
grace-v
| 2025-09-18T14:21:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T14:21:21Z |
---
license: apache-2.0
---
|
c-ho/17092025_modernbert_large_linsearch_only_abstract
|
c-ho
| 2025-09-18T14:15:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T04:52:21Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 17092025_modernbert_large_linsearch_only_abstract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 17092025_modernbert_large_linsearch_only_abstract
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2133
- Accuracy: 0.5926
- F1 Macro: 0.5523
- Precision Macro: 0.5713
- Recall Macro: 0.5447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
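For reference, this setup roughly corresponds to the following Hugging Face `TrainingArguments` (an illustrative sketch; the output directory and any arguments not listed above are assumptions, not values from the original training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="17092025_modernbert_large_linsearch_only_abstract",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size 16 * 4 = 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```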
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|
| 7.473 | 1.0 | 1757 | 1.2589 | 0.5601 | 0.4976 | 0.5604 | 0.4893 |
| 4.6095 | 2.0 | 3514 | 1.1778 | 0.5822 | 0.5388 | 0.5551 | 0.5353 |
| 3.9963 | 3.0 | 5271 | 1.1347 | 0.5921 | 0.5467 | 0.5699 | 0.5405 |
| 3.46 | 4.0 | 7028 | 1.1451 | 0.5928 | 0.5509 | 0.5658 | 0.5472 |
| 2.8519 | 4.9976 | 8780 | 1.2133 | 0.5926 | 0.5523 | 0.5713 | 0.5447 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
david4096/cl-all-MiniLM-L6-v2_gated_e128
|
david4096
| 2025-09-18T14:11:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"large-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:11:45Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- large-ontology
---
# cl_all-MiniLM-L6-v2_gated_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cl.owl
- **Domain**: general
- **Ontology Concepts**: 16,667
- **Concept Alignment**: 16,667/16,667 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 16667
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 53.4 MB
- **Model Size**: 244.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 16667 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cl-all-MiniLM-L6-v2_gated_e128')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cl-all-MiniLM-L6-v2_attention_e512
|
david4096
| 2025-09-18T14:08:23Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"large-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:08:09Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- large-ontology
---
# cl_all-MiniLM-L6-v2_attention_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cl.owl
- **Domain**: general
- **Ontology Concepts**: 16,667
- **Concept Alignment**: 16,667/16,667 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 16667
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 53.4 MB
- **Model Size**: 247.9 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 16667 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cl-all-MiniLM-L6-v2_attention_e512')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
mradermacher/gpt-oss-20b-plan-i1-GGUF
|
mradermacher
| 2025-09-18T14:07:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gpt_oss",
"en",
"dataset:EpistemeAI/plan-reason-deep-reasoning",
"base_model:EpistemeAI/gpt-oss-20b-plan",
"base_model:quantized:EpistemeAI/gpt-oss-20b-plan",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-18T12:13:58Z |
---
base_model: EpistemeAI/gpt-oss-20b-plan
datasets:
- EpistemeAI/plan-reason-deep-reasoning
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI/gpt-oss-20b-plan
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-20b-plan-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/gpt-oss-20b-plan-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
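One option is to load a quant directly from Python with `llama-cpp-python` (a hedged sketch, not from the original card; the file name, context size, and prompt are illustrative, and your llama.cpp build must be recent enough to support the `gpt_oss` architecture):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Illustrative only: point model_path at whichever quant you downloaded from this repo.
llm = Llama(model_path="gpt-oss-20b-plan.i1-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Plan, solve, then double-check: what is 17 * 23?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```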
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ1_S.gguf) | i1-IQ1_S | 12.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ3_S.gguf) | i1-IQ3_S | 12.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q2_K.gguf) | i1-Q2_K | 12.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q2_K_S.gguf) | i1-Q2_K_S | 12.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q4_0.gguf) | i1-Q4_0 | 12.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-IQ3_M.gguf) | i1-IQ3_M | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q4_1.gguf) | i1-Q4_1 | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-plan-i1-GGUF/resolve/main/gpt-oss-20b-plan.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
david4096/cl-all-MiniLM-L6-v2_attention_e128
|
david4096
| 2025-09-18T14:00:58Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"large-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:00:43Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- large-ontology
---
# cl_all-MiniLM-L6-v2_attention_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cl.owl
- **Domain**: general
- **Ontology Concepts**: 16,667
- **Concept Alignment**: 16,667/16,667 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 16667
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 53.4 MB
- **Model Size**: 248.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 16667 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cl-all-MiniLM-L6-v2_attention_e128')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
mradermacher/Mithril-LLaMa-70B-GGUF
|
mradermacher
| 2025-09-18T13:57:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Mithril-LLaMa-70B",
"base_model:quantized:TareksTesting/Mithril-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T09:28:18Z |
---
base_model: TareksTesting/Mithril-LLaMa-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TareksTesting/Mithril-LLaMa-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mithril-LLaMa-70B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
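For the multi-part quants in the table below (Q6_K and Q8_0 each come as two parts), the parts are plain byte splits that just need to be concatenated back into a single file before loading. A minimal Python sketch, using the Q6_K file names from the table (everything else is illustrative):
```python
import shutil

# Concatenate the split Q6_K download back into one GGUF file (simple byte concatenation).
parts = [
    "Mithril-LLaMa-70B.Q6_K.gguf.part1of2",
    "Mithril-LLaMa-70B.Q6_K.gguf.part2of2",
]
with open("Mithril-LLaMa-70B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```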
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mithril-LLaMa-70B-GGUF/resolve/main/Mithril-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
EpistemeAI/gpt-oss-20b-plan
|
EpistemeAI
| 2025-09-18T13:49:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:EpistemeAI/plan-reason-deep-reasoning",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-18T06:22:13Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
datasets:
- EpistemeAI/plan-reason-deep-reasoning
---
## Model card
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
### Deep reasoning gpt oss 20b ###
This is an experimental fine-tune of **gpt-oss-20b** trained on the EpistemeAI plan/deep-reasoning dataset. It provides deeper reasoning by planning, thinking/reasoning, double-checking the answer, and reporting a confidence measure.
This model is inspired by Nathan Lambert's talk "Traits of Next Generation Reasoning Models".
It introduces a structured multi-phase reasoning cycle for large language models (LLMs).
The model extends beyond simple question-answer pairs by adding explicit reasoning phases:
- **Planning** – The model outlines a step-by-step plan before attempting a solution.
- **Answering** – The model provides its initial solution.
- **Double-Checking** – The model revisits its answer, verifying correctness and coherence.
- **Confidence** – The model assigns a confidence score or justification for its final response.
This structure encourages models to reason more transparently, self-correct, and calibrate their confidence.
Both models were trained on OpenAI's [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "EpistemeAI/gpt-oss-20b-plan"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
    messages,
    max_new_tokens=16000,
)
print(outputs[0]["generated_text"][-1])
```
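If you prefer to call `model.generate` directly, the tokenizer's chat template applies the harmony format for you, as noted above. A minimal sketch (the prompt and generation settings are illustrative, not from the original card):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/gpt-oss-20b-plan"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Plan, answer, then double-check: how many primes are below 30?"}]
# apply_chat_template renders the messages in the harmony format expected by the model
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```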
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path EpistemeAI/gpt-oss-20b-plan
```
# Uploaded finetuned model
- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kumo2023/amelia
|
Kumo2023
| 2025-09-18T13:46:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-18T12:41:30Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Amelia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Kumo2023/amelia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Kumo2023/amelia', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
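As one illustrative example of weighting, you can load the LoRA under a named adapter and scale its influence. This is a sketch: the adapter name and the 0.8 scale are arbitrary choices, not values recommended by the trainer.
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# load the LoRA under a named adapter, then down-weight it
pipeline.load_lora_weights('Kumo2023/amelia', weight_name='lora.safetensors', adapter_name='amelia')
pipeline.set_adapters(['amelia'], adapter_weights=[0.8])  # illustrative scale
image = pipeline('TOK').images[0]
```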
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Kumo2023/amelia/discussions) to add images that show off what you’ve made with this LoRA.
|
Papaperez/blockassist
|
Papaperez
| 2025-09-18T13:45:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling rangy worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T10:39:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling rangy worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tue-mps/coco_panoptic_eomt_large_640_dinov3
|
tue-mps
| 2025-09-18T13:43:23Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision",
"image-segmentation",
"arxiv:2503.19108",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2025-09-15T08:47:10Z |
---
library_name: transformers
license: mit
tags:
- vision
- image-segmentation
- pytorch
---
# EoMT
**EoMT (Encoder-only Mask Transformer)** is a Vision Transformer (ViT) architecture designed for high-quality and efficient image segmentation. It was introduced in the CVPR 2025 highlight paper:
**[Your ViT is Secretly an Image Segmentation Model](https://www.tue-mps.org/eomt)**
by Tommie Kerssies, Niccolò Cavagnero, Alexander Hermans, Narges Norouzi, Giuseppe Averta, Bastian Leibe, Gijs Dubbelman, and Daan de Geus.
> **Key Insight**: Given sufficient scale and pretraining, a plain ViT with only a few additional parameters can perform segmentation without the need for task-specific decoders or pixel fusion modules. The same backbone supports semantic, instance, and panoptic segmentation with different post-processing 🤗
The original implementation can be found in this [repository](https://github.com/tue-mps/eomt).
The Hugging Face paper page is available at this [link](https://huggingface.co/papers/2503.19108).
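As a quick way to try this checkpoint, the generic Transformers `image-segmentation` pipeline should apply; this is a minimal sketch, and the COCO image URL is only a placeholder example:
```py
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="tue-mps/coco_panoptic_eomt_large_640_dinov3")
results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for segment in results:
    # each entry carries a label, an optional score, and a PIL mask
    print(segment["label"], segment.get("score"))
```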
---
## Citation
If you find our work useful, please consider citing us as:
```bibtex
@inproceedings{kerssies2025eomt,
author = {Kerssies, Tommie and Cavagnero, Niccolò and Hermans, Alexander and Norouzi, Narges and Averta, Giuseppe and Leibe, Bastian and Dubbelman, Gijs and de Geus, Daan},
title = {Your ViT is Secretly an Image Segmentation Model},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2025},
}
```
|
TsienDragon/flux-kontext-face-segmentation
|
TsienDragon
| 2025-09-18T13:43:17Z | 0 | 0 | null |
[
"image2image",
"faceseg",
"en",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T08:29:54Z |
---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
tags:
- image2image
- faceseg
---
# Flux-Kontext-Lora-Faceseg
<Gallery />
## Model description
This is a LoRA fine-tuned face segmentation model based on the Flux-Kontext architecture, specifically designed to transform facial images into precise segmentation masks. The model leverages the powerful multimodal capabilities of Flux-Kontext and enhances them through Parameter-Efficient Fine-Tuning (PEFT) with LoRA (Low-Rank Adaptation).
## Model Architecture
- Base Model: Flux-Kontext-Dev
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Task: Image-to-Image translation (Face → Segmentation Mask)
- Input: RGB facial images
- Output: Binary/grayscale segmentation masks highlighting facial regions
## Training Configuration
- Dataset: 20 carefully curated face segmentation samples
- Training Steps: 900-1000
- Prompt: "change the image from the face to the face segmentation mask"
- Precision Options:
- BF16 precision for high-quality results
- FP4 quantization for memory-efficient deployment
## Key Features
1. High Precision Segmentation: Accurately identifies and segments facial boundaries with fine detail preservation
2. Memory Efficient: FP4 quantized version maintains competitive quality while significantly reducing memory footprint
3. Fast Inference: Optimized for real-time applications with 20 inference steps
4. Robust Performance: Handles various lighting conditions and facial orientations
5. Parameter Efficient: Only trains the LoRA adapters (~18M parameters) while keeping the base model frozen
## Technical Specifications
- Inference Steps: 20
- CFG Scale: 2.5
- Input Resolution: Configurable (typically 512x512)
- Model Size: Base model + ~18M LoRA parameters
- Memory Usage:
- BF16 version: Higher memory, best quality
- FP4 version: 75% memory reduction, competitive quality
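Putting those specifications together, a minimal diffusers sketch could look like the following. The LoRA weight filename, input image, and output path are assumptions to adapt to this repo's files:
```py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("TsienDragon/flux-kontext-face-segmentation")  # assumes a default weight file

face = load_image("face.png")  # placeholder input image
mask = pipe(
    image=face,
    prompt="change the image from the face to the face segmentation mask",
    num_inference_steps=20,  # per the technical specifications above
    guidance_scale=2.5,      # CFG scale from the card
).images[0]
mask.save("face_mask.png")
```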
## Use Cases
- Identity Verification: KYC (Know Your Customer) applications
- Privacy Protection: Face anonymization while preserving facial structure
- Medical Applications: Facial analysis and dermatological assessments
- AR/VR Applications: Real-time face tracking and segmentation
- Content Creation: Automated face masking for video editing
## Performance Highlights
- Accuracy: Significantly improved boundary detection compared to the base model
- Detail Preservation: Maintains fine facial features in segmentation masks
- Consistency: Stable segmentation quality across different input conditions
- Efficiency: FP4 quantization achieves 4x memory savings with minimal quality loss
## Deployment Options
- High-Quality Mode: BF16 precision for maximum accuracy
- Efficient Mode: FP4 quantization for resource-constrained environments
- Real-time Applications: Optimized inference pipeline for low-latency requirements
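For the efficient mode, one plausible route is loading the Flux transformer in 4-bit FP4 with bitsandbytes through diffusers. This is a sketch under that assumption, not necessarily the exact quantization recipe used for this model:
```py
import torch
from diffusers import BitsAndBytesConfig, FluxKontextPipeline, FluxTransformer2DModel

# quantize only the transformer, which holds the bulk of the weights
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="fp4")
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("TsienDragon/flux-kontext-face-segmentation")  # assumed repo layout
```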
This model represents a practical solution for face segmentation tasks, offering an excellent balance between accuracy, efficiency, and deployability across various hardware configurations.
## Example
Control image:

Edited image with Qwen-Image-Edit using the prompt
`change the face to face segmentation mask`:

After LoRA fine-tuning with the same prompt:

## Code
The LoRA fine-tuning code for Qwen-Image-Edit is available here: [https://github.com/tsiendragon/qwen-image-finetune](https://github.com/tsiendragon/qwen-image-finetune)
## Download model
[Download](/flux-kontext-face-segmentation/tree/main) the model weights from the Files & versions tab.
|