| modelId (string, lengths 5–138) | author (string, lengths 2–42) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-18 00:38:06) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 429 classes) | tags (sequence, lengths 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-18 00:35:38) | card (string, lengths 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
PrunaAI/scb10x-llama-3-typhoon-v1.5-8b-vision-preview-HQQ-4bit-smashed | PrunaAI | "2024-08-19T10:43:51Z" | 6 | 0 | null | [
"bunny-llama",
"pruna-ai",
"custom_code",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-vision-preview",
"base_model:finetune:scb10x/llama-3-typhoon-v1.5-8b-vision-preview",
"region:us"
] | null | "2024-08-19T10:41:09Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: scb10x/llama-3-typhoon-v1.5-8b-vision-preview
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference speed, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo scb10x/llama-3-typhoon-v1.5-8b-vision-preview are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first; fall back to the generic HQQ HF loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/scb10x-llama-3-typhoon-v1.5-8b-vision-preview-HQQ-4bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/scb10x-llama-3-typhoon-v1.5-8b-vision-preview-HQQ-4bit-smashed")

# The tokenizer comes from the original base model.
tokenizer = AutoTokenizer.from_pretrained("scb10x/llama-3-typhoon-v1.5-8b-vision-preview")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, scb10x/llama-3-typhoon-v1.5-8b-vision-preview, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
MaziyarPanahi/YamshadowInex12_YamExperiment28 | MaziyarPanahi | "2024-04-08T09:24:38Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/YamExperiment28-7B",
"base_model:merge:automerger/YamExperiment28-7B",
"base_model:automerger/YamshadowInex12-7B",
"base_model:merge:automerger/YamshadowInex12-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T09:13:24Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowInex12_YamExperiment28
base_model:
- automerger/YamshadowInex12-7B
- automerger/YamExperiment28-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowInex12_YamExperiment28
YamshadowInex12_YamExperiment28 is a merge of the following models:
* [automerger/YamshadowInex12-7B](https://huggingface.co/automerger/YamshadowInex12-7B)
* [automerger/YamExperiment28-7B](https://huggingface.co/automerger/YamExperiment28-7B)
## 💻 Usage
```python
# Notebook-style install; drop the leading "!" when running in a shell.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/YamshadowInex12_YamExperiment28"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
BishanSingh246/mDeBERTa-v3-base-mnli-xnli-finetune_v1 | BishanSingh246 | "2024-04-01T17:29:51Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
"base_model:finetune:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-01T17:16:23Z" | ---
license: mit
base_model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
tags:
- generated_from_trainer
model-index:
- name: mDeBERTa-v3-base-mnli-xnli-finetune_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mDeBERTa-v3-base-mnli-xnli-finetune_v1
This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
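For reference, these settings map roughly onto a 🤗 `TrainingArguments` object as follows (a sketch reconstructed from the list above, not the original training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="mDeBERTa-v3-base-mnli-xnli-finetune_v1",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=6,
)
```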
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ignaaaa10/SpaceInvaders | ignaaaa10 | "2023-11-06T10:04:49Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-06T10:02:17Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 594.00 +/- 127.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ignaaaa10 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ignaaaa10 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ignaaaa10
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
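For quick inspection outside the zoo scripts, the checkpoint can also be loaded directly with `huggingface_sb3` (a sketch; the exact filename inside the repo is an assumption based on the usual RL Zoo naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub(
    repo_id="ignaaaa10/SpaceInvaders",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# custom_objects smooths over SB3/gym version differences when unpickling.
model = DQN.load(
    checkpoint,
    custom_objects={
        "learning_rate": 0.0,
        "lr_schedule": lambda _: 0.0,
        "exploration_schedule": lambda _: 0.0,
    },
)
```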
|
nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF | nonemonehpark | "2025-02-15T09:38:22Z" | 0 | 0 | null | [
"gguf",
"reasoning",
"llama-cpp",
"gguf-my-repo",
"am",
"ar",
"bn",
"zh",
"cs",
"nl",
"en",
"fr",
"de",
"el",
"ha",
"he",
"hi",
"id",
"it",
"ja",
"jv",
"km",
"ko",
"lo",
"ms",
"mr",
"fa",
"pl",
"pt",
"ro",
"ru",
"es",
"sw",
"sv",
"tl",
"ta",
"te",
"th",
"tr",
"uk",
"ur",
"vi",
"dataset:lightblue/reasoning-multilingual-R1-Llama-70B-train",
"base_model:lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
"base_model:quantized:lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-15T09:37:59Z" | ---
language:
- am
- ar
- bn
- zh
- cs
- nl
- en
- fr
- de
- el
- ha
- he
- hi
- id
- it
- ja
- jv
- km
- ko
- lo
- ms
- mr
- fa
- pl
- pt
- ro
- ru
- es
- sw
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- vi
license: apache-2.0
datasets:
- lightblue/reasoning-multilingual-R1-Llama-70B-train
tags:
- reasoning
- llama-cpp
- gguf-my-repo
base_model: lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
---
# nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF
This model was converted to GGUF format from [`lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual`](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_m.gguf -c 2048
```
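Beyond the CLI and server, the same file can be loaded from Python with the `llama-cpp-python` bindings (a sketch, assuming `pip install llama-cpp-python` plus `huggingface_hub` for Hub downloads):
```python
from llama_cpp import Llama

# Pulls the quantized file straight from the Hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="nonemonehpark/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_M-GGUF",
    filename="deepseek-r1-distill-qwen-7b-multilingual-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```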
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task1_organization | MayBashendy | "2025-01-18T07:28:58Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-17T21:27:00Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4761
- Qwk: 0.4275
- Mse: 1.4761
- Rmse: 1.2149
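For reference, the three metrics above can be reproduced with scikit-learn from label/prediction arrays (a sketch; `y_true` and `y_pred` below are placeholder values, not the actual evaluation data):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Placeholder integer ratings purely for illustration.
y_true = np.array([0, 1, 2, 3, 2])
y_pred = np.array([0, 2, 2, 3, 1])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(qwk, mse, rmse)
```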
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.025 | 2 | 6.8946 | 0.0116 | 6.8946 | 2.6258 |
| No log | 0.05 | 4 | 4.4969 | 0.0591 | 4.4969 | 2.1206 |
| No log | 0.075 | 6 | 3.1263 | 0.0833 | 3.1263 | 1.7681 |
| No log | 0.1 | 8 | 2.4853 | 0.0526 | 2.4853 | 1.5765 |
| No log | 0.125 | 10 | 2.2251 | 0.0141 | 2.2251 | 1.4917 |
| No log | 0.15 | 12 | 1.9939 | 0.1138 | 1.9939 | 1.4121 |
| No log | 0.175 | 14 | 1.8957 | 0.0702 | 1.8957 | 1.3769 |
| No log | 0.2 | 16 | 1.7114 | 0.1143 | 1.7114 | 1.3082 |
| No log | 0.225 | 18 | 1.7230 | 0.0917 | 1.7230 | 1.3126 |
| No log | 0.25 | 20 | 2.2349 | 0.1667 | 2.2349 | 1.4950 |
| No log | 0.275 | 22 | 2.1058 | 0.3088 | 2.1058 | 1.4511 |
| No log | 0.3 | 24 | 2.0060 | 0.2595 | 2.0060 | 1.4163 |
| No log | 0.325 | 26 | 2.1552 | 0.1408 | 2.1552 | 1.4681 |
| No log | 0.35 | 28 | 2.0855 | 0.1702 | 2.0855 | 1.4441 |
| No log | 0.375 | 30 | 1.9111 | 0.2812 | 1.9111 | 1.3824 |
| No log | 0.4 | 32 | 1.7574 | 0.2069 | 1.7574 | 1.3257 |
| No log | 0.425 | 34 | 1.8407 | 0.1636 | 1.8407 | 1.3567 |
| No log | 0.45 | 36 | 1.9746 | 0.1709 | 1.9746 | 1.4052 |
| No log | 0.475 | 38 | 1.9800 | 0.1849 | 1.9800 | 1.4071 |
| No log | 0.5 | 40 | 1.7305 | 0.2241 | 1.7305 | 1.3155 |
| No log | 0.525 | 42 | 1.4620 | 0.2909 | 1.4620 | 1.2091 |
| No log | 0.55 | 44 | 1.4670 | 0.3063 | 1.4670 | 1.2112 |
| No log | 0.575 | 46 | 1.6646 | 0.2034 | 1.6646 | 1.2902 |
| No log | 0.6 | 48 | 1.7107 | 0.3360 | 1.7107 | 1.3079 |
| No log | 0.625 | 50 | 1.6601 | 0.3465 | 1.6601 | 1.2885 |
| No log | 0.65 | 52 | 1.5052 | 0.4603 | 1.5052 | 1.2268 |
| No log | 0.675 | 54 | 1.3801 | 0.4167 | 1.3801 | 1.1748 |
| No log | 0.7 | 56 | 1.3839 | 0.4098 | 1.3839 | 1.1764 |
| No log | 0.725 | 58 | 1.3361 | 0.3826 | 1.3361 | 1.1559 |
| No log | 0.75 | 60 | 1.5769 | 0.2727 | 1.5769 | 1.2557 |
| No log | 0.775 | 62 | 1.6200 | 0.2545 | 1.6200 | 1.2728 |
| No log | 0.8 | 64 | 1.3562 | 0.3273 | 1.3562 | 1.1646 |
| No log | 0.825 | 66 | 1.2913 | 0.3860 | 1.2913 | 1.1364 |
| No log | 0.85 | 68 | 1.2721 | 0.4522 | 1.2721 | 1.1279 |
| No log | 0.875 | 70 | 1.2615 | 0.3571 | 1.2615 | 1.1231 |
| No log | 0.9 | 72 | 1.2761 | 0.3423 | 1.2761 | 1.1297 |
| No log | 0.925 | 74 | 1.2997 | 0.3273 | 1.2997 | 1.1401 |
| No log | 0.95 | 76 | 1.1775 | 0.4211 | 1.1775 | 1.0851 |
| No log | 0.975 | 78 | 1.1758 | 0.5210 | 1.1758 | 1.0843 |
| No log | 1.0 | 80 | 1.1852 | 0.5000 | 1.1852 | 1.0887 |
| No log | 1.025 | 82 | 1.1444 | 0.4561 | 1.1444 | 1.0698 |
| No log | 1.05 | 84 | 1.3019 | 0.3393 | 1.3019 | 1.1410 |
| No log | 1.075 | 86 | 1.6628 | 0.25 | 1.6628 | 1.2895 |
| No log | 1.1 | 88 | 1.6909 | 0.2832 | 1.6909 | 1.3003 |
| No log | 1.125 | 90 | 1.4031 | 0.3393 | 1.4031 | 1.1845 |
| No log | 1.15 | 92 | 1.3468 | 0.4035 | 1.3468 | 1.1605 |
| No log | 1.175 | 94 | 1.4256 | 0.3894 | 1.4256 | 1.1940 |
| No log | 1.2 | 96 | 1.8347 | 0.1754 | 1.8347 | 1.3545 |
| No log | 1.225 | 98 | 1.6904 | 0.3036 | 1.6904 | 1.3001 |
| No log | 1.25 | 100 | 1.2670 | 0.4348 | 1.2670 | 1.1256 |
| No log | 1.275 | 102 | 1.1140 | 0.4957 | 1.1140 | 1.0554 |
| No log | 1.3 | 104 | 1.3129 | 0.5246 | 1.3129 | 1.1458 |
| No log | 1.325 | 106 | 1.2838 | 0.5124 | 1.2838 | 1.1330 |
| No log | 1.35 | 108 | 1.0595 | 0.4348 | 1.0595 | 1.0293 |
| No log | 1.375 | 110 | 1.4117 | 0.4062 | 1.4117 | 1.1881 |
| No log | 1.4 | 112 | 1.8233 | 0.1290 | 1.8233 | 1.3503 |
| No log | 1.425 | 114 | 1.8407 | 0.1197 | 1.8407 | 1.3567 |
| No log | 1.45 | 116 | 1.4490 | 0.3360 | 1.4490 | 1.2037 |
| No log | 1.475 | 118 | 1.1065 | 0.5484 | 1.1065 | 1.0519 |
| No log | 1.5 | 120 | 0.9817 | 0.5968 | 0.9817 | 0.9908 |
| No log | 1.525 | 122 | 1.0322 | 0.5366 | 1.0322 | 1.0160 |
| No log | 1.55 | 124 | 1.0789 | 0.5124 | 1.0789 | 1.0387 |
| No log | 1.575 | 126 | 1.1578 | 0.4538 | 1.1578 | 1.0760 |
| No log | 1.6 | 128 | 1.2341 | 0.4426 | 1.2341 | 1.1109 |
| No log | 1.625 | 130 | 1.3138 | 0.4409 | 1.3138 | 1.1462 |
| No log | 1.65 | 132 | 1.3423 | 0.4545 | 1.3423 | 1.1586 |
| No log | 1.675 | 134 | 1.2634 | 0.4426 | 1.2634 | 1.1240 |
| No log | 1.7 | 136 | 1.2442 | 0.4310 | 1.2442 | 1.1154 |
| No log | 1.725 | 138 | 1.2415 | 0.4874 | 1.2415 | 1.1142 |
| No log | 1.75 | 140 | 1.2365 | 0.4667 | 1.2365 | 1.1120 |
| No log | 1.775 | 142 | 1.3829 | 0.4848 | 1.3829 | 1.1759 |
| No log | 1.8 | 144 | 1.6137 | 0.3597 | 1.6137 | 1.2703 |
| No log | 1.825 | 146 | 1.6555 | 0.3286 | 1.6555 | 1.2867 |
| No log | 1.85 | 148 | 1.5273 | 0.3852 | 1.5273 | 1.2358 |
| No log | 1.875 | 150 | 1.3507 | 0.4355 | 1.3507 | 1.1622 |
| No log | 1.9 | 152 | 1.2613 | 0.4202 | 1.2613 | 1.1231 |
| No log | 1.925 | 154 | 1.2658 | 0.4034 | 1.2658 | 1.1251 |
| No log | 1.95 | 156 | 1.3653 | 0.4677 | 1.3653 | 1.1684 |
| No log | 1.975 | 158 | 1.3493 | 0.4882 | 1.3493 | 1.1616 |
| No log | 2.0 | 160 | 1.2738 | 0.4806 | 1.2738 | 1.1286 |
| No log | 2.025 | 162 | 1.4033 | 0.4148 | 1.4033 | 1.1846 |
| No log | 2.05 | 164 | 1.6157 | 0.3857 | 1.6157 | 1.2711 |
| No log | 2.075 | 166 | 1.4870 | 0.4058 | 1.4870 | 1.2194 |
| No log | 2.1 | 168 | 1.2101 | 0.5116 | 1.2101 | 1.1000 |
| No log | 2.125 | 170 | 1.1480 | 0.5354 | 1.1480 | 1.0715 |
| No log | 2.15 | 172 | 1.2062 | 0.5426 | 1.2062 | 1.0983 |
| No log | 2.175 | 174 | 1.2991 | 0.4806 | 1.2991 | 1.1398 |
| No log | 2.2 | 176 | 1.2999 | 0.4844 | 1.2999 | 1.1401 |
| No log | 2.225 | 178 | 1.2271 | 0.5039 | 1.2271 | 1.1077 |
| No log | 2.25 | 180 | 1.1872 | 0.3860 | 1.1872 | 1.0896 |
| No log | 2.275 | 182 | 1.1974 | 0.4202 | 1.1974 | 1.0942 |
| No log | 2.3 | 184 | 1.2330 | 0.5039 | 1.2330 | 1.1104 |
| No log | 2.325 | 186 | 1.3067 | 0.4615 | 1.3067 | 1.1431 |
| No log | 2.35 | 188 | 1.5785 | 0.3714 | 1.5785 | 1.2564 |
| No log | 2.375 | 190 | 1.7655 | 0.3380 | 1.7655 | 1.3287 |
| No log | 2.4 | 192 | 1.6283 | 0.3597 | 1.6283 | 1.2760 |
| No log | 2.425 | 194 | 1.3955 | 0.3852 | 1.3955 | 1.1813 |
| No log | 2.45 | 196 | 1.2357 | 0.4603 | 1.2357 | 1.1116 |
| No log | 2.475 | 198 | 1.2323 | 0.4603 | 1.2323 | 1.1101 |
| No log | 2.5 | 200 | 1.4106 | 0.4296 | 1.4106 | 1.1877 |
| No log | 2.525 | 202 | 1.7394 | 0.3497 | 1.7394 | 1.3189 |
| No log | 2.55 | 204 | 1.6529 | 0.3521 | 1.6529 | 1.2856 |
| No log | 2.575 | 206 | 1.3577 | 0.4361 | 1.3577 | 1.1652 |
| No log | 2.6 | 208 | 1.0786 | 0.5484 | 1.0786 | 1.0385 |
| No log | 2.625 | 210 | 1.0470 | 0.5938 | 1.0470 | 1.0232 |
| No log | 2.65 | 212 | 1.0547 | 0.6308 | 1.0547 | 1.0270 |
| No log | 2.675 | 214 | 1.2644 | 0.4394 | 1.2644 | 1.1244 |
| No log | 2.7 | 216 | 1.4849 | 0.3504 | 1.4849 | 1.2185 |
| No log | 2.725 | 218 | 1.5542 | 0.3504 | 1.5542 | 1.2467 |
| No log | 2.75 | 220 | 1.4352 | 0.4615 | 1.4352 | 1.1980 |
| No log | 2.775 | 222 | 1.3328 | 0.4754 | 1.3328 | 1.1545 |
| No log | 2.8 | 224 | 1.2484 | 0.3860 | 1.2484 | 1.1173 |
| No log | 2.825 | 226 | 1.2583 | 0.3214 | 1.2583 | 1.1218 |
| No log | 2.85 | 228 | 1.2893 | 0.4483 | 1.2893 | 1.1355 |
| No log | 2.875 | 230 | 1.4055 | 0.4640 | 1.4055 | 1.1855 |
| No log | 2.9 | 232 | 1.5630 | 0.3676 | 1.5630 | 1.2502 |
| No log | 2.925 | 234 | 1.6011 | 0.3309 | 1.6011 | 1.2653 |
| No log | 2.95 | 236 | 1.4764 | 0.4030 | 1.4764 | 1.2151 |
| No log | 2.975 | 238 | 1.3155 | 0.5366 | 1.3155 | 1.1470 |
| No log | 3.0 | 240 | 1.2459 | 0.5366 | 1.2459 | 1.1162 |
| No log | 3.025 | 242 | 1.3064 | 0.5366 | 1.3064 | 1.1430 |
| No log | 3.05 | 244 | 1.4749 | 0.4308 | 1.4749 | 1.2145 |
| No log | 3.075 | 246 | 1.5650 | 0.3852 | 1.5650 | 1.2510 |
| No log | 3.1 | 248 | 1.4984 | 0.4060 | 1.4984 | 1.2241 |
| No log | 3.125 | 250 | 1.3399 | 0.512 | 1.3399 | 1.1576 |
| No log | 3.15 | 252 | 1.2543 | 0.5203 | 1.2543 | 1.1200 |
| No log | 3.175 | 254 | 1.2900 | 0.4921 | 1.2900 | 1.1358 |
| No log | 3.2 | 256 | 1.4427 | 0.4462 | 1.4427 | 1.2011 |
| No log | 3.225 | 258 | 1.5756 | 0.3824 | 1.5756 | 1.2552 |
| No log | 3.25 | 260 | 1.6242 | 0.3824 | 1.6242 | 1.2744 |
| No log | 3.275 | 262 | 1.5372 | 0.3609 | 1.5372 | 1.2399 |
| No log | 3.3 | 264 | 1.3995 | 0.4462 | 1.3995 | 1.1830 |
| No log | 3.325 | 266 | 1.4174 | 0.4122 | 1.4174 | 1.1906 |
| No log | 3.35 | 268 | 1.6244 | 0.3453 | 1.6244 | 1.2745 |
| No log | 3.375 | 270 | 1.8935 | 0.2817 | 1.8935 | 1.3760 |
| No log | 3.4 | 272 | 1.9807 | 0.2639 | 1.9807 | 1.4074 |
| No log | 3.425 | 274 | 1.8010 | 0.3121 | 1.8010 | 1.3420 |
| No log | 3.45 | 276 | 1.4647 | 0.3582 | 1.4647 | 1.2102 |
| No log | 3.475 | 278 | 1.2699 | 0.5156 | 1.2699 | 1.1269 |
| No log | 3.5 | 280 | 1.2353 | 0.5528 | 1.2353 | 1.1115 |
| No log | 3.525 | 282 | 1.2734 | 0.5041 | 1.2734 | 1.1284 |
| No log | 3.55 | 284 | 1.3894 | 0.496 | 1.3894 | 1.1787 |
| No log | 3.575 | 286 | 1.5241 | 0.4211 | 1.5241 | 1.2346 |
| No log | 3.6 | 288 | 1.6011 | 0.3676 | 1.6011 | 1.2653 |
| No log | 3.625 | 290 | 1.5420 | 0.4 | 1.5420 | 1.2418 |
| No log | 3.65 | 292 | 1.4518 | 0.4060 | 1.4518 | 1.2049 |
| No log | 3.675 | 294 | 1.2592 | 0.4769 | 1.2592 | 1.1221 |
| No log | 3.7 | 296 | 1.1363 | 0.5625 | 1.1363 | 1.0660 |
| No log | 3.725 | 298 | 1.1463 | 0.5625 | 1.1463 | 1.0707 |
| No log | 3.75 | 300 | 1.3052 | 0.4651 | 1.3052 | 1.1425 |
| No log | 3.775 | 302 | 1.4654 | 0.4328 | 1.4654 | 1.2105 |
| No log | 3.8 | 304 | 1.4796 | 0.4361 | 1.4796 | 1.2164 |
| No log | 3.825 | 306 | 1.3658 | 0.4651 | 1.3658 | 1.1687 |
| No log | 3.85 | 308 | 1.3051 | 0.4286 | 1.3051 | 1.1424 |
| No log | 3.875 | 310 | 1.3236 | 0.4531 | 1.3236 | 1.1505 |
| No log | 3.9 | 312 | 1.4594 | 0.4545 | 1.4594 | 1.2080 |
| No log | 3.925 | 314 | 1.6651 | 0.3623 | 1.6651 | 1.2904 |
| No log | 3.95 | 316 | 1.6988 | 0.3286 | 1.6988 | 1.3034 |
| No log | 3.975 | 318 | 1.5471 | 0.3704 | 1.5471 | 1.2438 |
| No log | 4.0 | 320 | 1.3765 | 0.4615 | 1.3765 | 1.1732 |
| No log | 4.025 | 322 | 1.3536 | 0.4651 | 1.3536 | 1.1634 |
| No log | 4.05 | 324 | 1.3368 | 0.4651 | 1.3368 | 1.1562 |
| No log | 4.075 | 326 | 1.3318 | 0.4320 | 1.3318 | 1.1540 |
| No log | 4.1 | 328 | 1.3187 | 0.4918 | 1.3187 | 1.1483 |
| No log | 4.125 | 330 | 1.3531 | 0.4918 | 1.3531 | 1.1632 |
| No log | 4.15 | 332 | 1.4439 | 0.4409 | 1.4439 | 1.2016 |
| No log | 4.175 | 334 | 1.4988 | 0.4769 | 1.4988 | 1.2243 |
| No log | 4.2 | 336 | 1.4228 | 0.4651 | 1.4228 | 1.1928 |
| No log | 4.225 | 338 | 1.2937 | 0.4444 | 1.2937 | 1.1374 |
| No log | 4.25 | 340 | 1.2749 | 0.4531 | 1.2749 | 1.1291 |
| No log | 4.275 | 342 | 1.2651 | 0.4651 | 1.2651 | 1.1247 |
| No log | 4.3 | 344 | 1.3030 | 0.4615 | 1.3030 | 1.1415 |
| No log | 4.325 | 346 | 1.3268 | 0.4427 | 1.3268 | 1.1518 |
| No log | 4.35 | 348 | 1.4751 | 0.4380 | 1.4751 | 1.2145 |
| No log | 4.375 | 350 | 1.4571 | 0.4255 | 1.4571 | 1.2071 |
| No log | 4.4 | 352 | 1.3554 | 0.4060 | 1.3554 | 1.1642 |
| No log | 4.425 | 354 | 1.1977 | 0.4769 | 1.1977 | 1.0944 |
| No log | 4.45 | 356 | 1.1889 | 0.5038 | 1.1889 | 1.0903 |
| No log | 4.475 | 358 | 1.2974 | 0.4662 | 1.2974 | 1.1390 |
| No log | 4.5 | 360 | 1.4203 | 0.4118 | 1.4203 | 1.1918 |
| No log | 4.525 | 362 | 1.6167 | 0.3521 | 1.6167 | 1.2715 |
| No log | 4.55 | 364 | 1.6603 | 0.3521 | 1.6603 | 1.2885 |
| No log | 4.575 | 366 | 1.6309 | 0.3478 | 1.6309 | 1.2771 |
| No log | 4.6 | 368 | 1.5431 | 0.3824 | 1.5431 | 1.2422 |
| No log | 4.625 | 370 | 1.4981 | 0.3759 | 1.4981 | 1.2240 |
| No log | 4.65 | 372 | 1.3730 | 0.4651 | 1.3730 | 1.1717 |
| No log | 4.675 | 374 | 1.2168 | 0.5238 | 1.2168 | 1.1031 |
| No log | 4.7 | 376 | 1.2083 | 0.496 | 1.2083 | 1.0992 |
| No log | 4.725 | 378 | 1.3364 | 0.4615 | 1.3364 | 1.1560 |
| No log | 4.75 | 380 | 1.5423 | 0.4148 | 1.5423 | 1.2419 |
| No log | 4.775 | 382 | 1.6300 | 0.3796 | 1.6300 | 1.2767 |
| No log | 4.8 | 384 | 1.5222 | 0.4478 | 1.5222 | 1.2338 |
| No log | 4.825 | 386 | 1.3186 | 0.4651 | 1.3186 | 1.1483 |
| No log | 4.85 | 388 | 1.1377 | 0.5161 | 1.1377 | 1.0666 |
| No log | 4.875 | 390 | 1.0646 | 0.528 | 1.0646 | 1.0318 |
| No log | 4.9 | 392 | 1.1237 | 0.4844 | 1.1237 | 1.0600 |
| No log | 4.925 | 394 | 1.3544 | 0.4511 | 1.3543 | 1.1638 |
| No log | 4.95 | 396 | 1.5425 | 0.3824 | 1.5425 | 1.2420 |
| No log | 4.975 | 398 | 1.4347 | 0.4478 | 1.4347 | 1.1978 |
| No log | 5.0 | 400 | 1.2707 | 0.4697 | 1.2707 | 1.1272 |
| No log | 5.025 | 402 | 1.1791 | 0.5116 | 1.1791 | 1.0859 |
| No log | 5.05 | 404 | 1.0859 | 0.5161 | 1.0859 | 1.0421 |
| No log | 5.075 | 406 | 1.0961 | 0.528 | 1.0961 | 1.0469 |
| No log | 5.1 | 408 | 1.1958 | 0.4697 | 1.1958 | 1.0935 |
| No log | 5.125 | 410 | 1.4718 | 0.3971 | 1.4718 | 1.2132 |
| No log | 5.15 | 412 | 1.8354 | 0.2857 | 1.8354 | 1.3548 |
| No log | 5.175 | 414 | 1.9227 | 0.2378 | 1.9227 | 1.3866 |
| No log | 5.2 | 416 | 1.7947 | 0.2837 | 1.7947 | 1.3397 |
| No log | 5.225 | 418 | 1.4317 | 0.4478 | 1.4317 | 1.1966 |
| No log | 5.25 | 420 | 1.1171 | 0.48 | 1.1171 | 1.0569 |
| No log | 5.275 | 422 | 1.0773 | 0.4959 | 1.0773 | 1.0379 |
| No log | 5.3 | 424 | 1.1246 | 0.4918 | 1.1246 | 1.0605 |
| No log | 5.325 | 426 | 1.2554 | 0.4882 | 1.2554 | 1.1205 |
| No log | 5.35 | 428 | 1.4243 | 0.4545 | 1.4243 | 1.1934 |
| No log | 5.375 | 430 | 1.4085 | 0.4545 | 1.4085 | 1.1868 |
| No log | 5.4 | 432 | 1.2768 | 0.4806 | 1.2768 | 1.1300 |
| No log | 5.425 | 434 | 1.2005 | 0.512 | 1.2005 | 1.0957 |
| No log | 5.45 | 436 | 1.2507 | 0.5039 | 1.2507 | 1.1184 |
| No log | 5.475 | 438 | 1.3992 | 0.4462 | 1.3992 | 1.1829 |
| No log | 5.5 | 440 | 1.5459 | 0.4179 | 1.5459 | 1.2434 |
| No log | 5.525 | 442 | 1.7602 | 0.3022 | 1.7602 | 1.3267 |
| No log | 5.55 | 444 | 1.8992 | 0.2553 | 1.8992 | 1.3781 |
| No log | 5.575 | 446 | 1.8395 | 0.2857 | 1.8395 | 1.3563 |
| No log | 5.6 | 448 | 1.6809 | 0.3066 | 1.6809 | 1.2965 |
| No log | 5.625 | 450 | 1.6171 | 0.3504 | 1.6171 | 1.2716 |
| No log | 5.65 | 452 | 1.6833 | 0.3066 | 1.6833 | 1.2974 |
| No log | 5.675 | 454 | 1.5986 | 0.3731 | 1.5986 | 1.2644 |
| No log | 5.7 | 456 | 1.3916 | 0.4154 | 1.3916 | 1.1797 |
| No log | 5.725 | 458 | 1.2947 | 0.4882 | 1.2947 | 1.1378 |
| No log | 5.75 | 460 | 1.2272 | 0.4878 | 1.2272 | 1.1078 |
| No log | 5.775 | 462 | 1.2329 | 0.5203 | 1.2329 | 1.1103 |
| No log | 5.8 | 464 | 1.3008 | 0.4677 | 1.3008 | 1.1405 |
| No log | 5.825 | 466 | 1.4504 | 0.4154 | 1.4504 | 1.2043 |
| No log | 5.85 | 468 | 1.6627 | 0.3433 | 1.6627 | 1.2895 |
| No log | 5.875 | 470 | 1.7431 | 0.3066 | 1.7431 | 1.3203 |
| No log | 5.9 | 472 | 1.7117 | 0.3066 | 1.7117 | 1.3083 |
| No log | 5.925 | 474 | 1.5806 | 0.3609 | 1.5806 | 1.2572 |
| No log | 5.95 | 476 | 1.4433 | 0.3910 | 1.4433 | 1.2014 |
| No log | 5.975 | 478 | 1.3391 | 0.4375 | 1.3391 | 1.1572 |
| No log | 6.0 | 480 | 1.3259 | 0.4186 | 1.3259 | 1.1515 |
| No log | 6.025 | 482 | 1.3445 | 0.4 | 1.3445 | 1.1595 |
| No log | 6.05 | 484 | 1.4461 | 0.3910 | 1.4461 | 1.2025 |
| No log | 6.075 | 486 | 1.5468 | 0.3609 | 1.5468 | 1.2437 |
| No log | 6.1 | 488 | 1.5084 | 0.3511 | 1.5084 | 1.2282 |
| No log | 6.125 | 490 | 1.3753 | 0.4762 | 1.3753 | 1.1727 |
| No log | 6.15 | 492 | 1.3631 | 0.496 | 1.3631 | 1.1675 |
| No log | 6.175 | 494 | 1.3674 | 0.496 | 1.3674 | 1.1694 |
| No log | 6.2 | 496 | 1.3920 | 0.496 | 1.3920 | 1.1798 |
| No log | 6.225 | 498 | 1.4096 | 0.4762 | 1.4096 | 1.1873 |
| 0.4085 | 6.25 | 500 | 1.4311 | 0.4724 | 1.4311 | 1.1963 |
| 0.4085 | 6.275 | 502 | 1.4991 | 0.4091 | 1.4991 | 1.2244 |
| 0.4085 | 6.3 | 504 | 1.6641 | 0.3134 | 1.6641 | 1.2900 |
| 0.4085 | 6.325 | 506 | 1.7147 | 0.2609 | 1.7147 | 1.3095 |
| 0.4085 | 6.35 | 508 | 1.6181 | 0.3582 | 1.6181 | 1.2720 |
| 0.4085 | 6.375 | 510 | 1.4761 | 0.4275 | 1.4761 | 1.2149 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/merlinite-7b-GGUF | mradermacher | "2025-02-08T08:04:58Z" | 31 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ibm-research/merlinite-7b",
"base_model:quantized:ibm-research/merlinite-7b",
"endpoints_compatible",
"region:us"
] | null | "2024-11-12T09:10:04Z" | ---
base_model: ibm-research/merlinite-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ibm-research/merlinite-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/merlinite-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/merlinite-7b-GGUF/resolve/main/merlinite-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
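To fetch one of these files programmatically, e.g. the recommended Q4_K_M quant, `huggingface_hub` works (a minimal sketch):
```python
from huggingface_hub import hf_hub_download

# Downloads the recommended Q4_K_M quant from this repo and returns its local path.
path = hf_hub_download(
    repo_id="mradermacher/merlinite-7b-GGUF",
    filename="merlinite-7b.Q4_K_M.gguf",
)
print(path)
```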
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ariffiq99/COPA_E_CARE_albert_base_finetuned | Ariffiq99 | "2024-06-23T02:56:04Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_albert_base_finetuned",
"base_model:finetune:Ariffiq99/e_care_albert_base_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:55:01Z" | ---
license: apache-2.0
base_model: Ariffiq99/e_care_albert_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_E_CARE_albert_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_E_CARE_albert_base_finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_albert_base_finetuned](https://huggingface.co/Ariffiq99/e_care_albert_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- F1: 0.732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.5300 | 0.74 |
| No log | 2.0 | 126 | 0.5038 | 0.76 |
| No log | 3.0 | 189 | 0.5805 | 0.75 |
| No log | 4.0 | 252 | 0.5694 | 0.7700 |
| No log | 5.0 | 315 | 0.6823 | 0.74 |
| No log | 6.0 | 378 | 0.7699 | 0.7420 |
| No log | 7.0 | 441 | 0.7680 | 0.754 |
| 0.2122 | 8.0 | 504 | 0.8489 | 0.738 |
| 0.2122 | 9.0 | 567 | 0.8899 | 0.7300 |
| 0.2122 | 10.0 | 630 | 0.8976 | 0.732 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
JPDuran/ppo-Huggy | JPDuran | "2023-10-20T06:13:50Z" | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-10-20T06:13:45Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JPDuran/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
vishwa27/CN_BERT_Sci | vishwa27 | "2023-11-16T23:11:22Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-15T22:58:33Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CN_BERT_Sci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CN_BERT_Sci
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0088
- F1: {'f1': 0.9980007996801279}
- Accuracy: {'accuracy': 0.998}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------:|:--------------------:|
| 0.3819 | 0.09 | 1000 | 0.3378 | {'f1': 0.7968977217644208} | {'accuracy': 0.7905} |
| 0.2709 | 0.18 | 2000 | 0.2719 | {'f1': 0.92162615255658} | {'accuracy': 0.9252} |
| 0.169 | 0.27 | 3000 | 0.0888 | {'f1': 0.9760964045831687} | {'accuracy': 0.9758} |
| 0.0963 | 0.36 | 4000 | 0.0350 | {'f1': 0.991297389216765} | {'accuracy': 0.9913} |
| 0.0499 | 0.44 | 5000 | 0.0260 | {'f1': 0.9937381969983102} | {'accuracy': 0.9937} |
| 0.0344 | 0.53 | 6000 | 0.0170 | {'f1': 0.9963048037551183} | {'accuracy': 0.9963} |
| 0.0307 | 0.62 | 7000 | 0.0213 | {'f1': 0.9957991598319663} | {'accuracy': 0.9958} |
| 0.036 | 0.71 | 8000 | 0.0105 | {'f1': 0.997700689793062} | {'accuracy': 0.9977} |
| 0.0209 | 0.8 | 9000 | 0.0106 | {'f1': 0.9981032245183188} | {'accuracy': 0.9981} |
| 0.0253 | 0.89 | 10000 | 0.0089 | {'f1': 0.9981024667931688} | {'accuracy': 0.9981} |
| 0.0231 | 0.98 | 11000 | 0.0088 | {'f1': 0.9980007996801279} | {'accuracy': 0.998} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
huggingtweets/ben_r_hoffman | huggingtweets | "2021-05-21T20:18:55Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/ben_r_hoffman/1618455389168/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365987365525815304/uxdWurnN_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Anarcho-Moses 🐍 🤖 AI Bot </div>
<div style="font-size: 15px">@ben_r_hoffman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ben_r_hoffman's tweets](https://twitter.com/ben_r_hoffman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 107 |
| Short tweets | 264 |
| Tweets kept | 2876 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vlvpdufz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ben_r_hoffman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2nf4hyti) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2nf4hyti/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ben_r_hoffman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[borisdayma/huggingtweets](https://github.com/borisdayma/huggingtweets)
|
AlessandroSpike/stock_finetuned_model | AlessandroSpike | "2025-02-18T13:50:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-18T13:49:52Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlessandroSpike
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
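A minimal way to load the fine-tune for inference with Unsloth (a sketch; 4-bit loading and the 2048-token sequence length are assumptions chosen to match the bnb-4bit base):
```python
from unsloth import FastLanguageModel

# Assumptions: 4-bit loading to match the bnb-4bit base, 2048-token context.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AlessandroSpike/stock_finetuned_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer("The stock market today", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```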
|
lesso04/3d16a822-59f0-4e69-b5c2-ef642aaefab5 | lesso04 | "2025-01-19T21:13:36Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-19T20:30:47Z" | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d16a822-59f0-4e69-b5c2-ef642aaefab5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: true
chat_template: llama3
datasets:
- data_files:
- b17c95c88fed8ad0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b17c95c88fed8ad0_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/3d16a822-59f0-4e69-b5c2-ef642aaefab5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/b17c95c88fed8ad0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 83663209-c8d7-4bdb-aacf-e51284f9a786
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 83663209-c8d7-4bdb-aacf-e51284f9a786
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d16a822-59f0-4e69-b5c2-ef642aaefab5
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0012 | 15 | nan |
| 0.0 | 0.0016 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Membersuger/Comp2_NVIDIA_GS33 | Membersuger | "2025-03-20T09:35:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-20T06:36:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kid502/diffusion_portrait | kid502 | "2024-05-23T17:24:10Z" | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-05-23T17:22:46Z" | ---
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
This model is a diffusion model for unconditional image generation of portraits.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kid502/diffusion_portrait')
image = pipeline().images[0]
image
```
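To draw several reproducible samples at once, the pipeline accepts standard `diffusers` arguments (a sketch; the batch size and seed are arbitrary choices, not from the card):
```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('kid502/diffusion_portrait')

# Fixed seed for reproducible portraits; batch_size controls samples per call.
generator = torch.Generator().manual_seed(0)
images = pipeline(batch_size=4, generator=generator).images
for i, image in enumerate(images):
    image.save(f"portrait_{i}.png")
```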
|
mradermacher/ChatHercules-2.5-Mistral-7B-GGUF | mradermacher | "2024-11-12T19:53:09Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Locutusque/Hercules-2.5-Mistral-7B",
"openchat/openchat-3.5-0106",
"en",
"base_model:hydra-project/ChatHercules-2.5-Mistral-7B",
"base_model:quantized:hydra-project/ChatHercules-2.5-Mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-11T19:48:56Z" | ---
base_model: hydra-project/ChatHercules-2.5-Mistral-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/Hercules-2.5-Mistral-7B
- openchat/openchat-3.5-0106
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hydra-project/ChatHercules-2.5-Mistral-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
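As a minimal illustration (not part of the original card), a downloaded quant can be run with a recent llama.cpp build, assuming its CLI binary is named `llama-cli`:
```shell
# hedged sketch: binary name and flags assume a recent llama.cpp build
./llama-cli -m ChatHercules-2.5-Mistral-7B.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```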
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ChatHercules-2.5-Mistral-7B-GGUF/resolve/main/ChatHercules-2.5-Mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Joooorrit/12_hot | Joooorrit | "2025-04-14T17:08:30Z" | 396 | 0 | null | [
"safetensors",
"gpt_optimized",
"custom_code",
"region:us"
] | null | "2025-03-07T00:29:13Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf | RichardErkhov | "2024-05-17T09:01:12Z" | 47 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-16T22:14:04Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
autotrain-mixtral7x8b-math - GGUF
- Model creator: https://huggingface.co/abhishek/
- Original model: https://huggingface.co/abhishek/autotrain-mixtral7x8b-math/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [autotrain-mixtral7x8b-math.Q2_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q2_K.gguf) | Q2_K | 16.12GB |
| [autotrain-mixtral7x8b-math.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [autotrain-mixtral7x8b-math.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [autotrain-mixtral7x8b-math.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [autotrain-mixtral7x8b-math.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [autotrain-mixtral7x8b-math.Q3_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K.gguf) | Q3_K | 21.0GB |
| [autotrain-mixtral7x8b-math.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [autotrain-mixtral7x8b-math.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [autotrain-mixtral7x8b-math.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [autotrain-mixtral7x8b-math.Q4_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_0.gguf) | Q4_0 | 24.63GB |
| [autotrain-mixtral7x8b-math.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [autotrain-mixtral7x8b-math.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [autotrain-mixtral7x8b-math.Q4_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K.gguf) | Q4_K | 26.49GB |
| [autotrain-mixtral7x8b-math.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [autotrain-mixtral7x8b-math.Q4_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_1.gguf) | Q4_1 | 27.32GB |
| [autotrain-mixtral7x8b-math.Q5_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_0.gguf) | Q5_0 | 30.02GB |
| [autotrain-mixtral7x8b-math.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [autotrain-mixtral7x8b-math.Q5_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K.gguf) | Q5_K | 30.95GB |
| [autotrain-mixtral7x8b-math.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [autotrain-mixtral7x8b-math.Q5_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_1.gguf) | Q5_1 | 32.71GB |
| [autotrain-mixtral7x8b-math.Q6_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q6_K.gguf) | Q6_K | 35.74GB |
| [autotrain-mixtral7x8b-math.Q8_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/tree/main/) | Q8_0 | 46.22GB |
Original model description:
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
HBDX/Seq-Struct-TransfoRNA | HBDX | "2024-06-20T12:44:27Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T13:05:08Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
license: gpl-3.0
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
## Steps to run model
- First install [transforna](https://github.com/gitHBDX/TransfoRNA/tree/master)
- Example code:
```python
from transforna import GeneEmbeddModel,RnaTokenizer
import torch
model_name = 'Seq-Struct'
model_path = f"HBDX/{model_name}-TransfoRNA"
#load model and tokenizer
model = GeneEmbeddModel.from_pretrained(model_path)
model.eval()
#init tokenizer. Tokenizer will automatically get secondary structure of sequence using Vienna RNA package
tokenizer = RnaTokenizer.from_pretrained(model_path,model_name=model_name)
output = tokenizer(['AAAGTCGGAGGTTCGAAGACGATCAGATAC','TTTTCGGAACTGAGGCCATGATTAAGAGGG'])
#inference
#gene_embedds and second input embedds are the latent space representation of the input sequence and the second input respectively.
#In this case, the second input would be the secondary structure of the sequence
gene_embedd, second_input_embedd, activations,attn_scores_first,attn_scores_second = \
model(output['input_ids'])
#get sub class labels
sub_class_labels = model.convert_ids_to_labels(activations)
#get major class labels
major_class_labels = model.convert_subclass_to_majorclass(sub_class_labels)
```
|
pittawat/q-Taxi-v3-eval | pittawat | "2023-02-06T07:09:51Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-06T07:09:48Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-eval
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is assumed to come from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="pittawat/q-Taxi-v3-eval", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
FarhanAkhtar/Llama-3B-cot-kaggle-new1 | FarhanAkhtar | "2025-02-19T10:19:42Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2025-02-18T10:29:39Z" | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FarhanAkhtar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pedropauletti/whisper-small-pt | pedropauletti | "2023-09-17T20:06:06Z" | 80 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-09-17T18:03:54Z" | ---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Pt - Pedro Pauletti
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 12.154569053330267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pt - Pedro Pauletti
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2550
- Wer Ortho: 17.7888
- Wer: 12.1546
## Model description
More information needed
## Intended uses & limitations
More information needed
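As an illustrative sketch (an addition, not from the original card), the checkpoint can be used with the standard 🤗 `pipeline` for speech recognition; the audio filename below is a placeholder:
```python
from transformers import pipeline

# hedged sketch: "sample_pt.mp3" is a placeholder audio file
asr = pipeline("automatic-speech-recognition", model="pedropauletti/whisper-small-pt")
print(asr("sample_pt.mp3")["text"])
```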
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.2456 | 0.28 | 500 | 0.2550 | 17.7888 | 12.1546 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LHRuig/ferreroroch | LHRuig | "2025-01-20T14:29:57Z" | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-20T14:29:39Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ferreroroch
---
# ferreroroch
<Gallery />
## Model description
ferreroroch lora
## Trigger words
You should use `ferreroroch` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ferreroroch/tree/main) them in the Files & versions tab.
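For illustration, a hedged loading sketch with diffusers (assuming the safetensors LoRA in this repo loads directly on top of the FLUX.1-dev base model):
```python
import torch
from diffusers import FluxPipeline

# hedged sketch: assumes the LoRA weights in this repo are diffusers-compatible
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/ferreroroch")
image = pipe("ferreroroch wearing a suit", num_inference_steps=28).images[0]
image.save("ferreroroch.png")
```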
|
alvarobartt/bert-base-multilingual-cased-ner-spanish | alvarobartt | "2024-09-02T07:13:43Z" | 61 | 2 | span-marker | [
"span-marker",
"pytorch",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"es",
"dataset:xtreme",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:cc-by-4.0",
"model-index",
"region:us"
] | token-classification | "2023-09-28T09:56:53Z" | ---
language:
- es
license: cc-by-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- xtreme
metrics:
- precision
- recall
- f1
widget:
- text: Me llamo Álvaro y vivo en Barcelona (España).
- text: Marie Curie fue profesora en la Universidad de Paris.
- text: La Universidad de Salamanca es la universidad en activo más antigua de España.
pipeline_tag: token-classification
base_model: bert-base-multilingual-cased
model-index:
- name: SpanMarker with bert-base-multilingual-cased on xtreme/PAN-X.es
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: xtreme/PAN-X.es
type: xtreme
split: eval
metrics:
- type: f1
value: 0.9186626746506986
name: F1
- type: precision
value: 0.9231154938993816
name: Precision
- type: recall
value: 0.9142526071842411
name: Recall
---
# SpanMarker with bert-base-multilingual-cased on xtreme/PAN-X.es
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [xtreme/PAN-X.es](https://huggingface.co/datasets/xtreme) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
- **Maximum Sequence Length:** 512 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [xtreme/PAN-X.es](https://huggingface.co/datasets/xtreme)
- **Languages:** es
- **License:** cc-by-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------|
| LOC | "Salamanca", "Paris", "Barcelona (España)" |
| ORG | "ONU", "Fútbol Club Barcelona", "Museo Nacional del Prado" |
| PER | "Fray Luis de León", "Leo Messi", "Álvaro Bartolomé" |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("alvarobartt/bert-base-multilingual-cased-ner-spanish")
# Run inference
entities = model.predict("Marie Curie fue profesora en la Universidad de Paris.")
```
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:-------|:----|
| Sentence length | 3 | 6.4642 | 64 |
| Entities per sentence | 1 | 1.2375 | 24 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
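For illustration only (this sketch is not from the original card), a comparable run could be set up with the SpanMarker API roughly as follows; the dataset columns and label names are assumptions based on PAN-X:
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from transformers import TrainingArguments

# hedged sketch: assumes xtreme/PAN-X.es provides "tokens"/"ner_tags" columns
dataset = load_dataset("xtreme", "PAN-X.es")
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

model = SpanMarkerModel.from_pretrained(
    "bert-base-multilingual-cased",
    labels=labels,
    model_max_length=512,
    entity_max_length=8,
)
args = TrainingArguments(
    output_dir="span-marker-mbert-es",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    num_train_epochs=2,
    warmup_ratio=0.1,
)
trainer = Trainer(model=model, args=args, train_dataset=dataset["train"], eval_dataset=dataset["validation"])
trainer.train()
```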
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.3998 | 1000 | 0.0388 | 0.8761 | 0.8641 | 0.8701 | 0.9223 |
| 0.7997 | 2000 | 0.0326 | 0.8995 | 0.8740 | 0.8866 | 0.9341 |
| 1.1995 | 3000 | 0.0277 | 0.9076 | 0.9019 | 0.9047 | 0.9424 |
| 1.5994 | 4000 | 0.0261 | 0.9143 | 0.9113 | 0.9128 | 0.9473 |
| 1.9992 | 5000 | 0.0234 | 0.9231 | 0.9143 | 0.9187 | 0.9502 |
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.3.1.dev
- Transformers: 4.33.3
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.5
- Tokenizers: 0.13.3
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF | mradermacher | "2025-03-30T06:26:50Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"bangla",
"banglaLLM",
"banglaNLP",
"LLM",
"LLama",
"Transformer",
"nlp",
"bengali",
"bn",
"en",
"dataset:uonlp/CulturaX",
"base_model:BanglaLLM/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1",
"base_model:quantized:BanglaLLM/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | "2025-03-30T06:17:56Z" | ---
base_model: BanglaLLM/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1
datasets:
- uonlp/CulturaX
language:
- bn
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- bangla
- banglaLLM
- banglaNLP
- LLM
- LLama
- Transformer
- nlp
- bengali
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BanglaLLM/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q6_K.gguf) | Q6_K | 1.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1-GGUF/resolve/main/BanglaLLama-3.2-1b-unolp-culturax-base-v0.0.1.f16.gguf) | f16 | 3.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PrunaAI/bond005-meno-tiny-0.1-bnb-8bit-smashed | PrunaAI | "2024-12-11T09:13:47Z" | 5 | 0 | null | [
"safetensors",
"qwen2",
"pruna-ai",
"base_model:bond005/meno-tiny-0.1",
"base_model:quantized:bond005/meno-tiny-0.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-12-11T09:11:34Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: bond005/meno-tiny-0.1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bond005/meno-tiny-0.1 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/bond005-meno-tiny-0.1-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("bond005/meno-tiny-0.1")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model bond005/meno-tiny-0.1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
Sakalti/ultiima-72B | Sakalti | "2025-02-02T12:13:08Z" | 1,990 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"zh",
"fr",
"es",
"pt",
"de",
"it",
"ru",
"ja",
"ko",
"vi",
"th",
"ar",
"fa",
"he",
"tr",
"cs",
"pl",
"hi",
"bn",
"ur",
"id",
"ms",
"lo",
"my",
"ceb",
"km",
"tl",
"nl",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-72B",
"base_model:merge:Qwen/Qwen2.5-72B",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:merge:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-09T22:59:09Z" | ---
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Qwen/Qwen2.5-72B-Instruct
- Qwen/Qwen2.5-72B
license_name: qwen
inference: true
model-index:
- name: ultiima-72B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.4
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 61.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 52.42
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 21.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.12
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.51
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-72B
name: Open LLM Leaderboard
---
Built With Qwen
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-72B-Instruct
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-72B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: float16
```
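As a usage note (an assumption, not part of the original card), a config like this is typically applied with the mergekit CLI:
```shell
# hedged sketch: mergekit-yaml is mergekit's standard entry point
pip install mergekit
mergekit-yaml config.yaml ./ultiima-72B --cuda
```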
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Sakalti__ultiima-72B-details)
| Metric |Value|
|-------------------|----:|
|Avg. |46.58|
|IFEval (0-Shot) |71.40|
|BBH (3-Shot) |61.10|
|MATH Lvl 5 (4-Shot)|52.42|
|GPQA (0-shot) |21.92|
|MuSR (0-shot) |18.12|
|MMLU-PRO (5-shot) |54.51|
|
mradermacher/Moo-i1-GGUF | mradermacher | "2025-03-20T06:50:51Z" | 32 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Troy-Codes/Moo",
"base_model:quantized:Troy-Codes/Moo",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-20T02:50:42Z" | ---
base_model: Troy-Codes/Moo
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Troy-Codes/Moo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Moo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ1_S.gguf) | i1-IQ1_S | 8.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ1_M.gguf) | i1-IQ1_M | 9.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ2_S.gguf) | i1-IQ2_S | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ2_M.gguf) | i1-IQ2_M | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q2_K_S.gguf) | i1-Q2_K_S | 13.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q2_K.gguf) | i1-Q2_K | 14.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 15.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 17.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ3_S.gguf) | i1-IQ3_S | 17.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ3_M.gguf) | i1-IQ3_M | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 18.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 20.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 20.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q4_0.gguf) | i1-Q4_0 | 22.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 22.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 23.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q4_1.gguf) | i1-Q4_1 | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 26.8 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/Moo-i1-GGUF/resolve/main/Moo.i1-Q6_K.gguf) | i1-Q6_K | 31.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tensorblock/falcon-7b-sharded-bf16-GGUF | tensorblock | "2024-11-16T01:24:06Z" | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:quantized:ybelkada/falcon-7b-sharded-bf16",
"endpoints_compatible",
"region:us"
] | null | "2024-11-13T00:06:41Z" | ---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ybelkada/falcon-7b-sharded-bf16 - GGUF
This repo contains GGUF format model files for [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [falcon-7b-sharded-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q2_K.gguf) | Q2_K | 3.595 GB | smallest, significant quality loss - not recommended for most purposes |
| [falcon-7b-sharded-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q3_K_S.gguf) | Q3_K_S | 3.595 GB | very small, high quality loss |
| [falcon-7b-sharded-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q3_K_M.gguf) | Q3_K_M | 3.856 GB | very small, high quality loss |
| [falcon-7b-sharded-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q3_K_L.gguf) | Q3_K_L | 4.078 GB | small, substantial quality loss |
| [falcon-7b-sharded-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q4_0.gguf) | Q4_0 | 3.922 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [falcon-7b-sharded-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q4_K_S.gguf) | Q4_K_S | 4.420 GB | small, greater quality loss |
| [falcon-7b-sharded-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q4_K_M.gguf) | Q4_K_M | 4.633 GB | medium, balanced quality - recommended |
| [falcon-7b-sharded-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q5_0.gguf) | Q5_0 | 4.727 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [falcon-7b-sharded-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q5_K_S.gguf) | Q5_K_S | 4.976 GB | large, low quality loss - recommended |
| [falcon-7b-sharded-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q5_K_M.gguf) | Q5_K_M | 5.338 GB | large, very low quality loss - recommended |
| [falcon-7b-sharded-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q6_K.gguf) | Q6_K | 6.548 GB | very large, extremely low quality loss |
| [falcon-7b-sharded-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/falcon-7b-sharded-bf16-GGUF/blob/main/falcon-7b-sharded-bf16-Q8_0.gguf) | Q8_0 | 7.145 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/falcon-7b-sharded-bf16-GGUF --include "falcon-7b-sharded-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/falcon-7b-sharded-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
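Once downloaded, the file can be run locally; the sketch below assumes a recent llama.cpp build and uses placeholder paths:
```shell
# placeholder paths; adjust to your llama.cpp build and download directory
./llama-cli -m MY_LOCAL_DIR/falcon-7b-sharded-bf16-Q4_K_M.gguf -p "Once upon a time" -n 128
```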
|
mikeendale/Customer-Service | mikeendale | "2024-09-13T15:26:35Z" | 9 | 0 | null | [
"safetensors",
"gpt2",
"pytorch",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-09-13T13:23:43Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Trained on Cerebras-GPT 111M for customer service abilities
Check out their [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **111M** parameters.
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-111M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data with byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048.
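As a hedged illustration of the schedule described above (linear warmup, then cosine decay to one tenth of the peak learning rate), not Cerebras' actual training code:
```python
import math

def lr_at(step, warmup_steps, total_steps, peak_lr):
    """Linear warmup, then cosine decay from peak_lr down to peak_lr / 10."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * (0.55 + 0.45 * math.cos(math.pi * progress))
```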
All models were trained to Chinchilla point: 20 tokens per model parameter. Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for detail.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
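For a local sanity check of the downstream numbers, the harness can be invoked roughly as below; task names and flags differ across lm-evaluation-harness releases, so treat this as a sketch rather than the exact command used for the paper:
```shell
pip install lm-eval
# 0-shot evaluation on the reported task set; adjust task names for your harness version.
lm_eval --model hf \
  --model_args pretrained=cerebras/Cerebras-GPT-111M \
  --tasks hellaswag,piqa,winogrande,lambada_openai,arc_easy,arc_challenge,openbookqa \
  --num_fewshot 0 \
  --batch_size 8
```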
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budget constraints, Cerebras-GPT models were only trained and evaluated using the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
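As an illustration only (not part of the Cerebras release), a supervised fine-tuning pass with TRL's `SFTTrainer` might look like the sketch below; the dataset is a placeholder and the exact `SFTTrainer`/`SFTConfig` arguments vary across TRL versions:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder instruction dataset; substitute your own chat/instruction data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="cerebras/Cerebras-GPT-111M",  # model id is passed as a string in recent TRL
    train_dataset=dataset,
    args=SFTConfig(output_dir="cerebras-gpt-sft", max_steps=100),
)
trainer.train()
```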
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints, such as toxicity, gender bias, pejorative content, and racially sensitive content. Please refer to the Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, who made this work possible. |
aeyae/ppo-snowb | aeyae | "2024-02-19T17:52:27Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-02-19T17:52:24Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aeyae/ppo-snowb
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AigarciabFabero/PPO_LunarLander_1e6 | AigarciabFabero | "2025-03-24T17:35:55Z" | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-21T18:53:01Z" | |
Qwen/Qwen1.5-32B-Chat-GGUF | Qwen | "2024-04-09T16:47:47Z" | 2,216 | 52 | null | [
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2309.16609",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-04-04T09:05:43Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-32B-Chat-GGUF/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-32B-Chat-GGUF
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
In this repo, we provide quantized models in the GGUF formats, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
To demonstrate their model quality, we follow [`llama.cpp`](https://github.com/ggerganov/llama.cpp) to evaluate their perplexity on the wiki test set. Results are shown below:
|Size | fp16 | q8_0 | q6_k | q5_k_m | q5_0 | q4_k_m | q4_0 | q3_k_m | q2_k |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
|0.5B | 34.20 | 34.22 | 34.31 | 33.80 | 34.02 | 34.27 | 36.74 | 38.25 | 62.14 |
|1.8B | 15.99 | 15.99 | 15.99 | 16.09 | 16.01 | 16.22 | 16.54 | 17.03 | 19.99 |
|4B | 13.20 | 13.21 | 13.28 | 13.24 | 13.27 | 13.61 | 13.44 | 13.67 | 15.65 |
|7B | 14.21 | 14.24 | 14.35 | 14.32 | 14.12 | 14.35 | 14.47 | 15.11 | 16.57 |
|14B | 10.91 | 10.91 | 10.93 | 10.98 | 10.88 | 10.92 | 10.92 | 11.24 | 12.27 |
|32B | 8.87 | 8.89 | 8.91 | 8.94 | 8.93 | 8.96 | 9.17 | 9.14 | 10.51 |
|72B | 7.97 | 7.99 | 7.99 | 7.99 | 8.01 | 8.00 | 8.01 | 8.06 | 8.63 |
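To reproduce numbers like these locally, `llama.cpp` ships a `perplexity` tool; a sketch assuming the wiki test set has been saved as `wiki.test.raw` (binary names and flags can change between llama.cpp revisions):
```shell
# Compute perplexity of a quantized model over the wiki test set.
./perplexity -m qwen1_5-32b-chat-q5_k_m.gguf -f wiki.test.raw
```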
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we temporarily did not include GQA (except for 32B) or the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide.
## How to use
Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
```shell
huggingface-cli download Qwen/Qwen1.5-32B-Chat-GGUF qwen1_5-32b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
We demonstrate how to use `llama.cpp` to run Qwen1.5:
```shell
./main -m qwen1_5-32b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
```
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
mradermacher/Falcon3-1B-MentalHealth-GGUF | mradermacher | "2025-02-10T00:20:02Z" | 240 | 0 | transformers | [
"transformers",
"gguf",
"mentalhealth",
"selfcare",
"wellness",
"wellbeing",
"depression",
"anxiety",
"stress",
"emotionalsupport",
"mentalsupport",
"advisor",
"en",
"dataset:marmikpandya/mental-health",
"base_model:ShivomH/Falcon3-1B-MentalHealth",
"base_model:quantized:ShivomH/Falcon3-1B-MentalHealth",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-09T23:56:52Z" | ---
base_model: ShivomH/Falcon3-1B-MentalHealth
datasets:
- marmikpandya/mental-health
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mentalhealth
- selfcare
- wellness
- wellbeing
- depression
- anxiety
- stress
- emotionalsupport
- mentalsupport
- advisor
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ShivomH/Falcon3-1B-MentalHealth
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-1B-MentalHealth-GGUF/resolve/main/Falcon3-1B-MentalHealth.f16.gguf) | f16 | 3.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
daniel40/287392a8-31a5-4709-8e2e-a8e59f53490c | daniel40 | "2025-02-05T02:57:01Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T02:52:02Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 287392a8-31a5-4709-8e2e-a8e59f53490c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3140ed3b7bf340bc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3140ed3b7bf340bc_train_data.json
type:
field_input: text
field_instruction: question
field_output: attempt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/287392a8-31a5-4709-8e2e-a8e59f53490c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3140ed3b7bf340bc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ce57b0b0-8e3a-467b-9f2a-755c5c4a621f
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ce57b0b0-8e3a-467b-9f2a-755c5c4a621f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 287392a8-31a5-4709-8e2e-a8e59f53490c
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 1.0610 |
| 0.6035 | 0.0875 | 50 | 0.6202 |
| 0.5688 | 0.1749 | 100 | 0.5588 |
| 0.5092 | 0.2624 | 150 | 0.5236 |
| 0.5048 | 0.3498 | 200 | 0.5142 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chvas37/gpt2-finetuned-wikitext2 | chvas37 | "2023-12-29T09:32:08Z" | 5 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-29T09:11:17Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: chvas37/gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chvas37/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4997
- Validation Loss: 6.3483
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.3137 | 6.7732 | 0 |
| 6.4997 | 6.3483 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
YesIfwRONG/Zero | YesIfwRONG | "2022-12-09T02:48:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-12-09T02:48:01Z" | This is a capstone project serving for training the model and exploring implementation on AIs. |
innovation64/lunralandsss | innovation64 | "2023-05-29T12:52:33Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-29T12:52:28Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -172.59 +/- 101.65
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'innovation64/lunralandsss'
'batch_size': 512
'minibatch_size': 128}
```
|
StepLaw/StepLaw-N_429M-D_49.0B-LR1.953e-03-BS262144 | StepLaw | "2025-04-15T16:04:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-11T12:49:14Z" | |
mohamedabdalrhmanhassan/intakafo-style | mohamedabdalrhmanhassan | "2025-04-16T17:12:10Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-16T16:42:55Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: intakafo-style
---
# Intakafo Style
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `intakafo-style` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "intakafo-style",
"lora_weights": "https://huggingface.co/mohamedabdalrhmanhassan/intakafo-style/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mohamedabdalrhmanhassan/intakafo-style', weight_name='lora.safetensors')
image = pipeline('intakafo-style').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mohamedabdalrhmanhassan/intakafo-style/discussions) to add images that show off what you’ve made with this LoRA.
|
MinaMila/phi3_GermanCredit_6ep_42_newversion | MinaMila | "2025-03-14T01:25:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-14T01:22:33Z" | ---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF | mradermacher | "2025-03-14T12:59:41Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"torch",
"trl",
"unsloth",
"llama",
"en",
"hi",
"dataset:student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Hinglish_Dataset",
"base_model:student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09",
"base_model:quantized:student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-14T12:54:24Z" | ---
base_model: student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09
datasets:
- student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Hinglish_Dataset
language:
- en
- hi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- torch
- trl
- unsloth
- llama
- gguf
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09-GGUF/resolve/main/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
waffox/lora_model | waffox | "2024-05-21T15:28:07Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-21T12:50:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** waffox
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
filipesantoscv11/9748d00e-397c-4d4f-a5de-b7857c7fa01f | filipesantoscv11 | "2025-01-21T11:37:59Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | "2025-01-21T11:35:25Z" | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9748d00e-397c-4d4f-a5de-b7857c7fa01f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdd56d09ce656747_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdd56d09ce656747_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: filipesantoscv11/9748d00e-397c-4d4f-a5de-b7857c7fa01f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/fdd56d09ce656747_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 9748d00e-397c-4d4f-a5de-b7857c7fa01f
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | nan |
| 0.0 | 0.0098 | 5 | nan |
| 0.0 | 0.0196 | 10 | nan |
| 0.0 | 0.0294 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
danielhanchen/tinyllama_lora_new_20022024 | danielhanchen | "2024-02-20T07:48:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-02-20T07:48:07Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
timzb/dbl_pretrained | timzb | "2023-11-01T21:01:55Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | "2023-11-01T20:15:48Z" | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
saneowl/test-001 | saneowl | "2024-04-02T06:45:04Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-04-02T06:35:43Z" | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
0x70DA/t5-v1_1-base-abs_qa | 0x70DA | "2023-06-24T02:27:41Z" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-25T05:13:01Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: t5-v1_1-base-abs_qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-v1_1-base-abs_qa
This model is a fine-tuned version of [MahmoudH/t5-v1_1-base-finetuned-sci_summ](https://huggingface.co/MahmoudH/t5-v1_1-base-finetuned-sci_summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5545
- Validation Loss: 0.6041
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 87288, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8647 | 0.6401 | 0 |
| 0.6569 | 0.6209 | 1 |
| 0.5545 | 0.6041 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
aleegis09/391fb8f4-a085-49c7-802d-de93393d94a3 | aleegis09 | "2025-01-21T18:15:14Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-21T17:35:37Z" | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 391fb8f4-a085-49c7-802d-de93393d94a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b5e06bf0e602bd38_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5e06bf0e602bd38_train_data.json
type:
field_instruction: section
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis09/391fb8f4-a085-49c7-802d-de93393d94a3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b5e06bf0e602bd38_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5312563a-16b4-452e-84f7-611f95b514ff
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5312563a-16b4-452e-84f7-611f95b514ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 391fb8f4-a085-49c7-802d-de93393d94a3
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.7858 | 0.0039 | 1 | 2.6465 |
| 9.792 | 0.1967 | 50 | 2.4238 |
| 9.966 | 0.3933 | 100 | 2.3770 |
| 10.0198 | 0.5900 | 150 | 2.3442 |
| 10.0113 | 0.7866 | 200 | 2.3351 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Victoriayu/clip-dpo-0.1-0.000005-0.2-ultra | Victoriayu | "2025-04-03T06:39:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-03T06:29:25Z" | |
xysmalobia/test-trainer | xysmalobia | "2023-11-30T20:07:57Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
base_model: bert-base-uncased
model-index:
- name: test-trainer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.8504901960784313
name: Accuracy
- type: f1
value: 0.893542757417103
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5802
- Accuracy: 0.8505
- F1: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4443 | 0.8039 | 0.8485 |
| 0.5584 | 2.0 | 918 | 0.3841 | 0.8431 | 0.8810 |
| 0.3941 | 3.0 | 1377 | 0.5802 | 0.8505 | 0.8935 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
bella05/pogny_10_64_0.01 | bella05 | "2024-06-03T23:50:21Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-03T19:35:38Z" | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: pogny_10_64_0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bella05/huggingface/runs/2fqy4l1d)
# pogny_10_64_0.01
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6851
- Accuracy: 0.4376
- F1: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 2.491 | 1.0 | 1205 | 2.5033 | 0.4376 | 0.2665 |
| 2.4679 | 2.0 | 2410 | 1.9460 | 0.4376 | 0.2665 |
| 2.302 | 3.0 | 3615 | 2.4098 | 0.0702 | 0.0092 |
| 2.1762 | 4.0 | 4820 | 2.2698 | 0.0545 | 0.0056 |
| 2.0639 | 5.0 | 6025 | 1.9917 | 0.4376 | 0.2665 |
| 2.0031 | 6.0 | 7230 | 1.9130 | 0.4376 | 0.2665 |
| 1.9241 | 7.0 | 8435 | 2.0131 | 0.4376 | 0.2665 |
| 1.8227 | 8.0 | 9640 | 1.8212 | 0.4376 | 0.2665 |
| 1.7854 | 9.0 | 10845 | 1.7379 | 0.4376 | 0.2665 |
| 1.7037 | 10.0 | 12050 | 1.6851 | 0.4376 | 0.2665 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-2 | jerseyjerry | "2025-03-18T13:37:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:other",
"region:us"
] | null | "2025-03-18T13:33:55Z" | ---
library_name: peft
license: other
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the flock_task5_train dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
kk-aivio/b8048ca1-3f27-42a2-aabc-e3250684f40d | kk-aivio | "2025-02-20T12:35:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | "2025-02-20T12:15:38Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8048ca1-3f27-42a2-aabc-e3250684f40d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b8048ca1-3f27-42a2-aabc-e3250684f40d
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dt-and-vanilla-ardt/ardt-vanilla-robust_train_walker2d_level-0209_0437-66 | dt-and-vanilla-ardt | "2023-09-02T05:08:32Z" | 35 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-09-02T03:38:44Z" | ---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-robust_train_walker2d_level-0209_0437-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-robust_train_walker2d_level-0209_0437-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Aditi1919/hindi_lora_chat_model_Biology | Aditi1919 | "2024-12-23T06:53:23Z" | 35 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-20T10:06:15Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Aditi1919
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/llamafy_-_Qwen-Qwen2.5-7B-Instruct-llamafied-4bits | RichardErkhov | "2025-02-12T00:18:27Z" | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-12T00:10:54Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen-Qwen2.5-7B-Instruct-llamafied - bnb 4bits
- Model creator: https://huggingface.co/llamafy/
- Original model: https://huggingface.co/llamafy/Qwen-Qwen2.5-7B-Instruct-llamafied/
Original model description:
---
base_model: Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
# Qwen/Qwen2.5-7B-Instruct (llamafied)
This is a version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) converted to the Llama format. It should be compatible with all programs that support Llama.
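For example, it can be loaded like any other Llama checkpoint with transformers (a minimal sketch; `device_map="auto"` assumes `accelerate` is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "llamafy/Qwen-Qwen2.5-7B-Instruct-llamafied"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```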
|
sail-rvc/owain2333333 | sail-rvc | "2023-07-14T07:42:09Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:41:54Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# owain2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:42:09
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
yshen99/ZhiGuoLiZheng-GPT2 | yshen99 | "2023-04-02T21:43:03Z" | 536 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-12-14T16:54:36Z" | ---
license: mit
widget:
- text: "要进一步加强党风廉政建设"
example_title: "example 1"
- text: "要落实全面建成"
example_title: "example 2"
---
GPT2 model fine-tuned with Chinese political text.
|
Shangding-Gu/llama-3-1-8b-math-orca-qlora-10k-ep1 | Shangding-Gu | "2025-04-07T21:47:07Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T21:30:16Z" | ---
base_model: Meta-Llama/Meta-Llama-3.1-8B
library_name: transformers
model_name: llama-3-1-8b-math-orca-qlora-10k-ep1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3-1-8b-math-orca-qlora-10k-ep1
This model is a fine-tuned version of [Meta-Llama/Meta-Llama-3.1-8B](https://huggingface.co/Meta-Llama/Meta-Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Shangding-Gu/llama-3-1-8b-math-orca-qlora-10k-ep1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
laion/CLIP-ViT-B-32-xlm-roberta-base-laion5B-s13B-b90k | laion | "2022-11-14T16:18:01Z" | 66,499 | 11 | open_clip | [
"open_clip",
"arxiv:1910.04867",
"license:mit",
"region:us"
] | null | "2022-11-13T16:37:55Z" | ---
license: mit
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card for CLIP ViT-B/32 xlm roberta base - LAION-5B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-B/32 model with an xlm-roberta-base text encoder, trained on LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster.
# Uses
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
# Training Details
## Training Data
This model was trained with the full LAION-5B (https://laion.ai/blog/laion-5b/).
## Training Procedure
Training was run with a batch size of 90k for 13B samples of LAION-5B; see the training report at https://wandb.ai/rom1504/open-clip/reports/xlm-roberta-base-B-32--VmlldzoyOTQ5OTE2
The model uses a ViT-B/32 on the visual side and an xlm-roberta-base, initialized with pretrained weights, on the text side.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
Testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and with COCO and Flickr for retrieval.
## Results
The model achieves
* imagenet 1k 62.33% (vs 62.9% for baseline)
* mscoco 63.4% (vs 60.8% for baseline)
* flickr30k 86.2% (vs 85.4% for baseline)
A preliminary multilingual evaluation was also run: 43% on Italian ImageNet-1k (vs 21% for the English B/32) and 37% on Japanese ImageNet-1k (vs 1% for the English B/32 and 50% for the Japanese CLIP B/16). This shows that the multilingual capability is indeed present, as expected; larger models should achieve even better performance.

# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
In addition to forthcoming LAION-5B (https://laion.ai/blog/laion-5b/) paper, please cite:
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How To Get Started With the Model
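For example, with OpenCLIP (a sketch; the model/pretrained tags below are assumptions based on this checkpoint's naming and should be verified with `open_clip.list_pretrained()`):
```python
import torch
import open_clip
from PIL import Image

# Assumed open_clip names for this checkpoint; verify with open_clip.list_pretrained()
model, _, preprocess = open_clip.create_model_and_transforms(
    "xlm-roberta-base-ViT-B-32", pretrained="laion5b_s13b_b90k"
)
tokenizer = open_clip.get_tokenizer("xlm-roberta-base-ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings and compute zero-shot classification probabilities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```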
See https://github.com/mlfoundations/open_clip for full usage details. |
Brentable/Llama-3.3-70B-Instruct-bnb-8bit | Brentable | "2025-03-16T21:06:16Z" | 0 | 0 | null | [
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-16T20:17:14Z" | ---
license: llama3.3
base_model:
- meta-llama/Llama-3.3-70B-Instruct
---
|
mergekit-community/mergekit-slerp-jwxgteu | mergekit-community | "2024-09-27T21:23:01Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-27T21:17:39Z" | ---
base_model:
- WizardLM/WizardMath-7B-V1.1
- NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
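As an illustration, a configuration like the one above can be applied with mergekit's command-line entry point (a sketch assuming a standard `pip install mergekit`):
```bash
# Write the YAML above to config.yaml, then produce the merged model:
mergekit-yaml config.yaml ./merged-model --cuda
```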
|
roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q5_K_M-GGUF | roleplaiapp | "2025-01-29T13:03:05Z" | 112 | 0 | transformers | [
"transformers",
"gguf",
"14b",
"5-bit",
"Q5_K_M",
"deepseek",
"llama-cpp",
"qwen25",
"text-generation",
"uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-29T13:02:23Z" | ---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 5-bit
- Q5_K_M
- deepseek
- gguf
- llama-cpp
- qwen25
- text-generation
- uncensored
---
# roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q5_K_M-GGUF
**Repo:** `roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q5_K_M-GGUF`
**Original Model:** `Qwen2.5-14B-DeepSeek-R1-1M-Uncensored`
**Quantized File:** `Qwen2.5-14B-DeepSeek-R1-1M-Uncensored.Q5_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_K_M`
## Overview
This is a GGUF Q5_K_M quantized version of Qwen2.5-14B-DeepSeek-R1-1M-Uncensored.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
mlx-community/Meta-Llama-3.1-8B-Instruct-3bit | mlx-community | "2024-11-23T21:10:17Z" | 30 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"3-bit",
"region:us"
] | text-generation | "2024-11-23T21:08:32Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mlx-community/Meta-Llama-3.1-8B-Instruct-3bit
The Model [mlx-community/Meta-Llama-3.1-8B-Instruct-3bit](https://huggingface.co/mlx-community/Meta-Llama-3.1-8B-Instruct-3bit) was converted to MLX format from [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using mlx-lm version **0.19.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-3bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
firdaouss07/speecht5_finetuned_darija | firdaouss07 | "2025-02-06T20:53:48Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2025-02-06T20:17:39Z" | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_darija
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_darija
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7775
- eval_model_preparation_time: 0.0101
- eval_runtime: 130.499
- eval_samples_per_second: 0.766
- eval_steps_per_second: 0.383
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
tensorblock/ChatMusician-Base-GGUF | tensorblock | "2024-12-14T15:45:08Z" | 34 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:m-a-p/ChatMusician-Base",
"base_model:quantized:m-a-p/ChatMusician-Base",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-14T15:11:51Z" | ---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: m-a-p/ChatMusician-Base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## m-a-p/ChatMusician-Base - GGUF
This repo contains GGUF format model files for [m-a-p/ChatMusician-Base](https://huggingface.co/m-a-p/ChatMusician-Base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ChatMusician-Base-Q2_K.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [ChatMusician-Base-Q3_K_S.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [ChatMusician-Base-Q3_K_M.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [ChatMusician-Base-Q3_K_L.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [ChatMusician-Base-Q4_0.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ChatMusician-Base-Q4_K_S.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [ChatMusician-Base-Q4_K_M.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [ChatMusician-Base-Q5_0.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ChatMusician-Base-Q5_K_S.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [ChatMusician-Base-Q5_K_M.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [ChatMusician-Base-Q6_K.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [ChatMusician-Base-Q8_0.gguf](https://huggingface.co/tensorblock/ChatMusician-Base-GGUF/blob/main/ChatMusician-Base-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ChatMusician-Base-GGUF --include "ChatMusician-Base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ChatMusician-Base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
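Once downloaded, a quant can be run locally with llama.cpp (a sketch; the binary name and flags assume a recent llama.cpp build):
```shell
./llama-cli -m MY_LOCAL_DIR/ChatMusician-Base-Q4_K_M.gguf -p "Compose a short folk melody in ABC notation." -n 256
```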
|
medieval-data/gliner-latin-medieval | medieval-data | "2024-06-17T15:22:17Z" | 0 | 1 | null | [
"pytorch",
"la",
"license:apache-2.0",
"region:us"
] | null | "2024-06-17T15:12:44Z" | ---
license: apache-2.0
language:
- la
---
# GLiNER Latin (Medieval)
This is a finetuned [GLiNER](https://github.com/urchade/GLiNER) model. For the base model, we used [gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1). This was a small test to explore how well GLiNER could be finetuned on synthetic Latin data. For the synthetic medieval Latin data, see [here](https://huggingface.co/datasets/medieval-data/gliner-latin-medieval-synthetic). A usage sketch is shown below.
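A minimal usage sketch (the calls follow the GLiNER library's documented interface; the example text and labels are illustrative assumptions):
```python
from gliner import GLiNER

model = GLiNER.from_pretrained("medieval-data/gliner-latin-medieval")

# Hypothetical medieval Latin sentence and entity labels
text = "Anno domini MCCXV Iohannes rex Angliae apud Runimede cartam concessit."
labels = ["person", "location", "date"]

entities = model.predict_entities(text, labels)
for entity in entities:
    print(entity["text"], "->", entity["label"])
```
|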
mradermacher/flammen22C-mistral-7B-GGUF | mradermacher | "2024-12-30T14:03:27Z" | 37 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/casual-conversation-DPO",
"base_model:flammenai/flammen22C-mistral-7B",
"base_model:quantized:flammenai/flammen22C-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-30T13:43:48Z" | ---
base_model: flammenai/flammen22C-mistral-7B
datasets:
- flammenai/casual-conversation-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/flammenai/flammen22C-mistral-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
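As a quick sketch (the binary name and flags assume a recent llama.cpp build and may differ by version), a downloaded quant can be run like so:
```
./llama-cli -m flammen22C-mistral-7B.Q4_K_M.gguf -p "Hello," -n 128
```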
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/flammen22C-mistral-7B-GGUF/resolve/main/flammen22C-mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jamesdolezal/thyroid-brs-v1 | jamesdolezal | "2023-03-27T03:04:41Z" | 0 | 3 | tf-keras | [
"tf-keras",
"arxiv:1610.02357",
"doi:10.57967/hf/1499",
"license:gpl-3.0",
"region:us"
] | null | "2022-11-09T16:43:31Z" | ---
license: gpl-3.0
---
# Thyroid BRAF-RAS Score (BRS) v1 Model Card
This model card describes a model associated with the manuscript "Deep learning prediction of BRAF-RAS gene expression signature identifies noninvasive follicular thyroid neoplasms with papillary-like nuclear features", by Dolezal _et al_, available [here](https://www.nature.com/articles/s41379-020-00724-3)
## Model Details
- **Developed by:** James Dolezal
- **Model type:** Deep convolutional neural network image classifier
- **Language(s):** English
- **License:** GPL-3.0
- **Model Description:** This is a model that can predict, from H&E-stained pathologic images of thyroid neoplasms, the predicted BRAF-RAS Score (BRS). BRS is a gene expression score scaled from -1 (BRAF-like) to +1 (RAS-like) indicating how similar a tumor's gene expression is to a BRAF-mutant and RAS-mutant tumor. The model is an [Xception](https://arxiv.org/abs/1610.02357) model with two dropout-enabled hidden layers.
- **Image processing:** This model expects images of H&E-stained pathology slides at 299 x 299 px and 302 x 302 μm resolution. Images should be stain-normalized using a modified Reinhard normalizer ("Reinhard-Fast") available [here](https://github.com/jamesdolezal/slideflow/blob/master/slideflow/norm/tensorflow/reinhard.py). The stain normalizer should be fit using the `target_means` and `target_stds` listed in the model `params.json` file. Images should be should be standardized with `tf.image.per_image_standardization()`.
- **Resources for more information:** [GitHub Repository](https://github.com/jamesdolezal/histologic-sheep)
# Uses
## Examples
For direct use, the model can be loaded using Tensorflow/Keras:
```
import tensorflow as tf
model = tf.keras.models.load_model('/path/')
```
or loaded with [Slideflow](https://github.com/jamesdolezal/slideflow) version 1.1+ with the following syntax:
```
import slideflow as sf
model = sf.model.load('/path/')
```
The stain normalizer can be loaded and fit using Slideflow:
```
normalizer = sf.util.get_model_normalizer('/path/')
```
The stain normalizer has a native Tensorflow transform and can be directly applied to a tf.data.Dataset:
```
# Map the stain normalizer transformation
# to a tf.data.Dataset
dataset = dataset.map(normalizer.tf_to_tf)
```
Alternatively, the model can be used to generate predictions for whole-slide images processed through Slideflow in an end-to-end [Project](https://slideflow.dev/project_setup.html). To use the model to generate predictions on data processed with Slideflow, simply pass the model to the [`Project.predict()`](https://slideflow.dev/project.html#slideflow.Project.predict) function:
```
import slideflow
P = sf.Project('/path/to/slideflow/project')
P.predict('/model/path')
```
## Direct Use
This model is intended for research purposes only. Possible research areas and tasks include
- Applications in educational settings.
- Research on pathology classification models for thyroid neoplasms.
Excluded uses are described below.
### Misuse and Out-of-Scope Use
This model should not be used in a clinical setting to generate predictions that will be used to inform patients, physicians, or any other health care members directly involved in their health care outside the context of an approved research protocol. Using the model in a clinical setting outside the context of an approved research protocol is a misuse of this model. This includes, but is not limited to:
- Generating predictions of images from a patient's tumor and sharing those predictions with the patient
- Generating predictions of images from a patient's tumor and sharing those predictions with the patient's physician, or other members of the patient's healthcare team
- Influencing a patient's health care treatment in any way based on output from this model
### Limitations
The model has not been validated in contexts where non-thyroid neoplasms, or rare thyroid subtypes such as anaplastic thyroid carcinoma, are possible.
### Bias
This model was trained on The Cancer Genome Atlas (TCGA), which contains patient data from communities and cultures which may not reflect the general population. This datasets is comprised of images from multiple institutions, which may introduce a potential source of bias from site-specific batch effects ([Howard, 2021](https://www.nature.com/articles/s41467-021-24698-1)).
## Training
**Training Data**
The following dataset was used to train the model:
- The Cancer Genome Atlas (TCGA), THCA cohort (see next section)
This model was trained on a total of 369 slides, with 116 BRAF-like tumors and 271 RAS-like tumors.
**Training Procedure**
Each whole-slide image was sectioned in a grid-wise fashion to extract tiles at 302 x 302 μm. Image tiles were extracted at the nearest downsample layer and resized to 299 x 299 px using [Libvips](https://www.libvips.org/API/current/libvips-resample.html#vips-resize). During training,
- Images are stain-normalized with a modified Reinhard normalizer ("Reinhard-Fast"), which excludes the brightness standardization step, available [here](https://github.com/jamesdolezal/slideflow/blob/master/slideflow/norm/tensorflow/reinhard.py)
- Images are randomly flipped and rotated (90, 180, 270)
- Images have a 50% chance of being JPEG compressed with quality level between 50-100%
- Images have a 10% chance of random Gaussian blur, with sigma between 0.5-2.0
- Images are standardized with `tf.image.per_image_standardization()`
- Images are classified through an Xception block, followed by two hidden layers with dropout (p=0.1) enabled during training
- The loss is mean squared error using the linear outcome BRS
- Training is completed after 1 epoch
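As an illustration, here is a minimal eager-mode TensorFlow sketch of the augmentations listed above (illustrative only; not the original training code):
```
import tensorflow as tf

def augment(image):
    # `image` is assumed to be a float32 tensor in [0, 1]
    # Random horizontal flip and random 90-degree rotation
    image = tf.image.random_flip_left_right(image)
    image = tf.image.rot90(image, k=int(tf.random.uniform([], 0, 4, dtype=tf.int32)))
    # 50% chance of JPEG re-compression with quality between 50 and 100
    if tf.random.uniform([]) < 0.5:
        image = tf.image.adjust_jpeg_quality(image, int(tf.random.uniform([], 50, 101, dtype=tf.int32)))
    # (The 10% random Gaussian blur has no tf.image builtin; e.g.
    # tfa.image.gaussian_filter2d from tensorflow-addons is one option.)
    # Per-image standardization, matching inference-time preprocessing
    return tf.image.per_image_standardization(image)
```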
Additional training information (an optimizer-schedule sketch follows the list):
- **Hardware:** 1 x A100 GPU
- **Optimizer:** Adam
- **Batch:** 128
- **Learning rate:** 0.0001, with a decay of 0.98 every 512 steps
- **Hidden layers:** 2 hidden layers of width 1024, with dropout p=0.1
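A Keras sketch of the stated optimizer configuration (the exact API used in training is an assumption):
```python
import tensorflow as tf

# lr 1e-4, decayed by a factor of 0.98 every 512 steps, with Adam
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4,
    decay_steps=512,
    decay_rate=0.98,
    staircase=True,  # apply the decay once every 512 steps
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```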
## Evaluation Results
External evaluation results are currently under peer review and will be posted once publicly available. |
huggingtweets/podsaveamerica | huggingtweets | "2021-05-22T18:56:38Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/podsaveamerica/1606408643346/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1287785491572461586/NzewkuRV_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Pod Save America 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@podsaveamerica bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@podsaveamerica's tweets](https://twitter.com/podsaveamerica).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3196</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1932</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>71</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1193</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jgm46l9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @podsaveamerica's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33ygk7cf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33ygk7cf/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/podsaveamerica'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit | unsloth | "2024-12-10T02:49:03Z" | 17,623 | 71 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"multimodal",
"vision",
"pytorch",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | "2024-09-25T19:37:28Z" | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- multimodal
- vision
- pytorch
---
## ***See [our collection](https://huggingface.co/collections/unsloth/vision-multimodal-models-673eb9908fc2cb3deebd2fa3) for vision models including Llama 3.2, Llava, Qwen2-VL and Pixtral.***
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 Vision (11B) here: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct)
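A minimal loading sketch, assuming the standard `transformers` mllama integration:
```python
from transformers import AutoProcessor, MllamaForConditionalGeneration
import torch

model_id = "unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit"

# The checkpoint ships pre-quantized with bitsandbytes 4-bit, so no
# extra quantization config is needed at load time.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```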
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
Mhmd2023/arabic1-Qwen2.5_7B-Alpaca | Mhmd2023 | "2025-02-16T21:01:21Z" | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | "2025-02-16T20:51:47Z" | ---
license: mit
tags:
- unsloth
---
|
KomeijiForce/Incubator-llama-2-7b | KomeijiForce | "2024-10-07T15:09:08Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-27T00:31:12Z" | ---
license: mit
---
# [EMNLP 2024] Incubating Text Classifiers Following User Instruction with Nothing but LLM
Incubator lets users obtain a personalized classifier from nothing but an instruction. The incubation is based on a LLaMA-2-7B model fine-tuned on Hugging Face metadata with self-diversification.
For usage, please visit the github repo: [https://github.com/KomeijiForce/Incubator](https://github.com/KomeijiForce/Incubator)

|
yaspnya/students_scores_model | yaspnya | "2024-12-08T18:57:51Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-08T16:51:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: students_scores_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# students_scores_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
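A sketch of how these map onto `transformers.TrainingArguments` (the `output_dir` is assumed):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="students_scores_model",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```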
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 141 | 1.0321 | 0.5313 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.21.0
|
gagagiga/Llama-3-MAAL-8B-Instruct-v0.1-Q4_K_M-GGUF | gagagiga | "2024-05-01T03:26:44Z" | 4 | 0 | null | [
"gguf",
"facebook",
"meta",
"llama",
"llama-3",
"llama-3-ko",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-01T03:26:31Z" | ---
language:
- en
- ko
license: llama3
tags:
- facebook
- meta
- llama
- llama-3
- llama-3-ko
- llama-cpp
- gguf-my-repo
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# gagagiga/Llama-3-MAAL-8B-Instruct-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`maum-ai/Llama-3-MAAL-8B-Instruct-v0.1`](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo gagagiga/Llama-3-MAAL-8B-Instruct-v0.1-Q4_K_M-GGUF --model llama-3-maal-8b-instruct-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo gagagiga/Llama-3-MAAL-8B-Instruct-v0.1-Q4_K_M-GGUF --model llama-3-maal-8b-instruct-v0.1.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-maal-8b-instruct-v0.1.Q4_K_M.gguf -n 128
```
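The GGUF file can also be used from Python via the `llama-cpp-python` bindings (a sketch; assumes the package and `huggingface-hub` are installed):
```python
from llama_cpp import Llama

# Downloads the GGUF file from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="gagagiga/Llama-3-MAAL-8B-Instruct-v0.1-Q4_K_M-GGUF",
    filename="llama-3-maal-8b-instruct-v0.1.Q4_K_M.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```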
|
RichardErkhov/nvidia_-_Llama3-ChatQA-1.5-8B-8bits | RichardErkhov | "2024-05-12T20:44:54Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2401.10225",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-12T20:36:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-ChatQA-1.5-8B - bnb 8bits
- Model creator: https://huggingface.co/nvidia/
- Original model: https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B/
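A minimal sketch for loading this pre-quantized 8-bit checkpoint (assumes the standard `transformers` + `bitsandbytes` integration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/nvidia_-_Llama3-ChatQA-1.5-8B-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are stored in bitsandbytes 8-bit format, so they load
# directly without an extra quantization config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```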
Original model description:
---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
---
## Model Details
We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225), and it is built on top of [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), we converted the checkpoints to Hugging Face format. **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!**
## Other Resources
[Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)   [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)   [Website](https://chatqa-project.github.io/)   [Paper](https://arxiv.org/abs/2401.10225)
## Benchmark Results
Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows:
| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |
Note that ChatQA-1.5 is built based on Llama-3 base model, and ChatQA-1.0 is built based on Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure fair comparison, we also compare average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench).
## Prompt Format
**We highly recommend that you use the prompt format we provide, as follows:**
### when context is available
<pre>
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
### when context is not available
<pre>
System: {System}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
**The content of the system's turn (i.e., {System}) for both scenarios is as follows:**
<pre>
This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context.
</pre>
**Note that our ChatQA-1.5 models are optimized for the capability with context, e.g., over documents or retrieved context.**
## How to use
### take the whole document as context
This can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "nvidia/Llama3-ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]
document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""
def get_formatted_input(messages, context):
    system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
    instruction = "Please give a full and complete answer for the question."

    for item in messages:
        if item['role'] == "user":
            ## only apply this instruction for the first user turn
            item['content'] = instruction + " " + item['content']
            break

    conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
    formatted_input = system + "\n\n" + context + "\n\n" + conversation

    return formatted_input
formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### run retrieval to get top-n chunks as context
This can be applied to the scenario where the document is very long, so that it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B/tree/main/docs) for users to play with.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json
## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/Llama3-ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')
## prepare documents, we take landrover car manual document that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']
messages = [
{"role": "user", "content": "how to connect the bluetooth in the car?"}
]
### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()
query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
## Compute similarity scores using dot product and rank the similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)
## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)
### running text generation
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Correspondence to
Zihan Liu ([email protected]), Wei Ping ([email protected])
## Citation
<pre>
@article{liu2024chatqa,
title={ChatQA: Building GPT-4 Level Conversational QA Models},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
</pre>
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
|
ckpt/stable-diffusion-3.5-medium | ckpt | "2024-10-29T15:29:59Z" | 983 | 8 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2403.03206",
"license:other",
"diffusers:StableDiffusion3Pipeline",
"region:us"
] | text-to-image | "2024-10-29T15:20:15Z" | ---
license: other
license_name: stabilityai-ai-community
license_link: LICENSE.md
tags:
- text-to-image
- stable-diffusion
- diffusers
inference: true
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License
Agreement](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium/blob/main/LICENSE.md)
and acknowledge Stability AI's [Privacy
Policy](https://stability.ai/privacy-policy).
extra_gated_fields:
Name: text
Email: text
Country: country
Organization or Affiliation: text
Receive email updates and promotions on Stability AI products, services, and research?:
type: select
options:
- 'Yes'
- 'No'
What do you intend to use the model for?:
type: select
options:
- Research
- Personal use
- Creative Professional
- Startup
- Enterprise
I agree to the License Agreement and acknowledge Stability AI's Privacy Policy: checkbox
language:
- en
pipeline_tag: text-to-image
---
# Stable Diffusion 3.5 Medium

## Model

[Stable Diffusion 3.5 Medium](https://stability.ai/news/introducing-stable-diffusion-3-5) is a Multimodal Diffusion Transformer with improvements (MMDiT-X) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
Please note: This model is released under the [Stability Community License](https://stability.ai/community-license-agreement). Visit [Stability AI](https://stability.ai/license) to learn more, or [contact us](https://stability.ai/enterprise) for commercial licensing details.
### Model Description
- **Developed by:** Stability AI
- **Model type:** MMDiT-X text-to-image generative model
- **Model Description:** This model generates images based on text prompts. It is a Multimodal Diffusion Transformer
(https://arxiv.org/abs/2403.03206) with improvements that use three fixed, pretrained text encoders, with QK-normalization to improve training stability, and dual attention blocks in the first 12 transformer layers.
### License
- **Community License:** Free for research, non-commercial, and commercial use for organizations or individuals with less than $1M in total annual revenue. More details can be found in the [Community License Agreement](https://stability.ai/community-license-agreement). Read more at https://stability.ai/license.
- **For individuals and organizations with annual revenue above $1M**: please [contact us](https://stability.ai/enterprise) to get an Enterprise License.
### Model Sources
For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for node-based UI inference, or [diffusers](https://github.com/huggingface/diffusers) or [GitHub](https://github.com/Stability-AI/sd3.5) for programmatic use.
- **ComfyUI:** [Github](https://github.com/comfyanonymous/ComfyUI), [Example Workflow](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
- **Huggingface Space:** [Space](https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-medium)
- **Diffusers**: [See below](#using-with-diffusers).
- **GitHub**: [GitHub](https://github.com/Stability-AI/sd3.5).
- **API Endpoints:**
- [Stability AI API](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post)
### Implementation Details
- **MMDiT-X:** Introduces self-attention modules in the first 13 layers of the transformer, enhancing multi-resolution generation and overall image coherence.
- **QK Normalization:** Implements the QK normalization technique to improve training stability.
- **Mixed-Resolution Training:**
- Progressive training stages: 256 → 512 → 768 → 1024 → 1440 resolution
- The final stage included mixed-scale image training to boost multi-resolution generation performance
- Extended positional embedding space to 384x384 (latent) at lower resolution stages
- Employed random crop augmentation on positional embeddings to enhance transformer layer robustness across the entire range of mixed resolutions and aspect ratios. For example, given a 64x64 latent image, we add a randomly cropped 64x64 embedding from the 192x192 embedding space during training as the input to the x stream.
These enhancements collectively contribute to the model's improved performance in multi-resolution image generation, coherence, and adaptability across various text-to-image tasks.
- **Text Encoders:**
- CLIPs: [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main), context length 77 tokens
- T5: [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl), context length 77/256 tokens at different stages of training
- **Training Data and Strategy:**
This model was trained on a wide variety of data, including synthetic data and filtered publicly available data.
For more technical details of the original MMDiT architecture, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper).
### Usage & Limitations
- While this model can handle long prompts, you may observe artifacts at the edges of generations when T5 tokens exceed 256. Pay attention to the token limits when using this model in your workflow, and shorten prompts if artifacts become too obvious.
- The medium model has a different training data distribution than the large model, so it may not respond to the same prompt similarly.
- We recommend sampling with **[Skip Layer Guidance](https://github.com/comfyanonymous/ComfyUI/pull/5404)** for better structure and anatomy coherency.
### Model Performance
See [blog](https://stability.ai/news/introducing-stable-diffusion-3-5) for our study about comparative performance in prompt adherence and aesthetic quality.
## File Structure
Click here to access the [Files and versions tab](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium/tree/main)
```
│
├── text_encoders/
│ ├── README.md
│ ├── clip_g.safetensors
│ ├── clip_l.safetensors
│ ├── t5xxl_fp16.safetensors
│ └── t5xxl_fp8_e4m3fn.safetensors
│
├── README.md
├── LICENSE
├── sd3.5_medium.safetensors
├── SD3.5M_example_workflow.json
├── SD3.5M_SLG_example_workflow.json
└── sd3_medium_demo.jpg
**File structure below is for diffusers integration**
├── scheduler/
├── text_encoder/
├── text_encoder_2/
├── text_encoder_3/
├── tokenizer/
├── tokenizer_2/
├── tokenizer_3/
├── transformer/
├── vae/
└── model_index.json
```
## Using with Diffusers
Upgrade to the latest version of the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```
pip install -U diffusers
```
and then you can run
```py
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
image = pipe(
"A capybara holding a sign that reads Hello World",
num_inference_steps=40,
guidance_scale=4.5,
).images[0]
image.save("capybara.png")
```
### Quantizing the model with diffusers
Reduce your VRAM usage and have the model fit on 🤏 VRAM GPUs
```
pip install bitsandbytes
```
```py
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel
from diffusers import StableDiffusion3Pipeline
import torch
model_id = "stabilityai/stable-diffusion-3.5-medium"
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model_nf4 = SD3Transformer2DModel.from_pretrained(
model_id,
subfolder="transformer",
quantization_config=nf4_config,
torch_dtype=torch.bfloat16
)
pipeline = StableDiffusion3Pipeline.from_pretrained(
model_id,
transformer=model_nf4,
torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature's body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"
image = pipeline(
prompt=prompt,
num_inference_steps=40,
guidance_scale=4.5,
max_sequence_length=512,
).images[0]
image.save("whimsical.png")
```
### Fine-tuning
Please see the fine-tuning guide [here](https://stabilityai.notion.site/Stable-Diffusion-3-5-Large-Fine-tuning-Tutorial-11a61cdcd1968027a15bdbd7c40be8c6).
## Uses
### Intended Uses
Intended uses include the following:
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models, including understanding the limitations of generative models.
All uses of the model must be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy).
### Out-of-Scope Uses
The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out-of-scope of the abilities of this model.
## Safety
As part of our safety-by-design and responsible AI deployment approach, we take deliberate measures to ensure Integrity starts at the early stages of development. We implement safety measures throughout the development of our models. We have implemented safety mitigations that are intended to reduce the risk of certain harms, however we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.
For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety).
### Integrity Evaluation
Our integrity evaluation methods include structured evaluations and red-teaming testing for certain harms. Testing was conducted primarily in English and may not cover all possible harms.
### Risks identified and mitigations:
* Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases.
* Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our [Acceptable Use Policy](https://stability.ai/use-policy), including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products.
* Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy.
### Contact
Please report any issues with the model or contact us:
* Safety issues: [email protected]
* Security issues: [email protected]
* Privacy issues: [email protected]
* License and general: https://stability.ai/license
* Enterprise license: https://stability.ai/enterprise
|
xavisgg/q-FrozenLake-v1-4x4-noSlippery | xavisgg | "2023-01-09T10:49:47Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-09T10:49:43Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="xavisgg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
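A greedy rollout sketch, assuming `model["qtable"]` holds the learned Q-table (the key name follows the Deep RL course convention) and the classic `gym` step API:
```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
```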
|
DucPTIT/VQA-BartPho | DucPTIT | "2025-03-26T03:27:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-26T03:26:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KyuC/whisper-tiny.ko-processor | KyuC | "2025-04-10T08:59:50Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-01-22T04:43:50Z" | |
Mylamoore040/Myla | Mylamoore040 | "2025-02-20T19:45:54Z" | 0 | 0 | diffusers | [
"diffusers",
"translation",
"en",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:cognitivecomputations/dolphin-r1",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:finetune:deepseek-ai/DeepSeek-R1",
"license:bigcode-openrail-m",
"region:us"
] | translation | "2025-02-20T19:42:51Z" | ---
license: bigcode-openrail-m
datasets:
- open-thoughts/OpenThoughts-114k
- open-r1/OpenR1-Math-220k
- cognitivecomputations/dolphin-r1
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
pipeline_tag: translation
library_name: diffusers
--- |
sanchit-gandhi/whisper-small-kab-1k-steps | sanchit-gandhi | "2022-12-11T14:43:28Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ka",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-11T11:22:40Z" | ---
language:
- ka
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Georgian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 kab
type: mozilla-foundation/common_voice_11_0
config: kab
split: test
args: kab
metrics:
- name: Wer
type: wer
value: 53.84203447245193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Georgian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 kab dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6125
- Wer: 53.8420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
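A sketch of how these map onto `transformers.Seq2SeqTrainingArguments` (the `output_dir` is assumed):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-kab-1k-steps",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
)
```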
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5555 | 1.06 | 1000 | 0.6125 | 53.8420 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit | medmekk | "2025-03-17T22:27:21Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit",
"base_model:quantized:medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-17T22:27:06Z" | ---
base_model:
- medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit
---
# medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit (Quantized)
## Description
This model is a quantized version of the original model `medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit`. It has been quantized using int4 quantization with bitsandbytes.
## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: uint8
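These settings correspond to the following `BitsAndBytesConfig` (a reconstruction, not copied from the quantization script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
)
```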
## Usage
You can use this model in your applications by loading it directly from the Hugging Face Hub:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("medmekk/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit")
```
|
Makkoen/whisper-large-cit-do1.5-wd1e-3-lr5 | Makkoen | "2024-05-27T16:01:43Z" | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-27T16:00:14Z" | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-cit-do1.5-wd1e-3-lr5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-cit-do1.5-wd1e-3-lr5
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 27.9176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.9999 | 0.8889 | 10 | 0.8228 | 34.3249 |
| 0.7031 | 1.7778 | 20 | 0.6328 | 32.2654 |
| 0.4625 | 2.6667 | 30 | 0.5498 | 30.4348 |
| 0.2785 | 3.5556 | 40 | 0.5278 | 32.2654 |
| 0.1827 | 4.4444 | 50 | 0.5557 | 28.6041 |
| 0.1029 | 5.3333 | 60 | 0.6138 | 28.3753 |
| 0.06 | 6.2222 | 70 | 0.6641 | 29.7483 |
| 0.0266 | 7.1111 | 80 | 0.7666 | 29.0618 |
| 0.0229 | 8.0 | 90 | 0.7114 | 29.9771 |
| 0.0143 | 8.8889 | 100 | 0.7417 | 27.0023 |
| 0.0183 | 9.7778 | 110 | 0.8423 | 30.8924 |
| 0.0115 | 10.6667 | 120 | 0.7061 | 29.0618 |
| 0.0091 | 11.5556 | 130 | 0.7661 | 28.8330 |
| 0.0029 | 12.4444 | 140 | 0.8232 | 28.1465 |
| 0.0064 | 13.3333 | 150 | 0.8213 | 29.5195 |
| 0.0032 | 14.2222 | 160 | 0.8389 | 27.6888 |
| 0.0021 | 15.1111 | 170 | 0.8511 | 28.3753 |
| 0.0023 | 16.0 | 180 | 0.8545 | 28.3753 |
| 0.0015 | 16.8889 | 190 | 0.8599 | 28.1465 |
| 0.0013 | 17.7778 | 200 | 0.8623 | 27.9176 |
### Framework versions
- Transformers 4.41.1
- Pytorch 1.13.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Kuongan/fold_3_model_roberta | Kuongan | "2025-01-12T04:00:46Z" | 22 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-12T02:46:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
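With the fields above filled in, a rough back-of-the-envelope estimate in the spirit of Lacoste et al. (2019) multiplies hardware power draw, runtime, and grid carbon intensity. The numbers below are illustrative placeholders, not measurements for this model:
```py
# Hypothetical illustration of a Lacoste et al. style estimate;
# every input below is a made-up placeholder.
power_kw = 0.3          # average accelerator draw, in kW
hours = 10.0            # total training hours
pue = 1.1               # data-center power usage effectiveness
carbon_intensity = 0.4  # kg CO2eq per kWh for the compute region

emissions_kg = power_kw * hours * pue * carbon_intensity
print(f"~{emissions_kg:.2f} kg CO2eq")
```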
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesserfield/CGPT-lora | lesserfield | "2024-06-15T04:05:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"base_model:finetune:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-15T04:05:24Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
---
# Uploaded model
- **Developed by:** lesserfield
- **License:** apache-2.0
- **Finetuned from model :** failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lesso11/f9ecdb5d-33e1-4f63-884e-e85b2a9553e6 | lesso11 | "2025-04-14T15:20:24Z" | 0 | 0 | null | [
"safetensors",
"gemma",
"region:us"
] | null | "2025-04-14T14:14:28Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
UWNSL/DeepSeek-R1-Distill-Qwen-7B-SafeChain | UWNSL | "2025-04-02T21:53:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:UWNSL/SafeChain",
"arxiv:2502.12025",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T21:38:01Z" | ---
library_name: transformers
datasets:
- UWNSL/SafeChain
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
**Check out details on our [project page](https://safe-chain.github.io/), [source code repo](https://github.com/uw-nsl/safechain), and [paper](https://arxiv.org/pdf/2502.12025)**
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{jiang2025safechain,
title={SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities},
author={Jiang, Fengqing and Xu, Zhangchen and Li, Yuetai and Niu, Luyao and Xiang, Zhen and Li, Bo and Lin, Bill Yuchen and Poovendran, Radha},
journal={arXiv preprint arXiv:2502.12025},
year={2025}
}
```
|
mlfoundations-dev/10k_globalbatchsize64_lr4e5_epochs3 | mlfoundations-dev | "2025-03-27T16:40:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-27T13:47:35Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 10k_globalbatchsize64_lr4e5_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 10k_globalbatchsize64_lr4e5_epochs3
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts_10000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
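As a rough, hypothetical translation of the list above into 🤗 `TrainingArguments` (field names follow the standard Trainer API; the multi-GPU launch itself is out of scope here):
```py
from transformers import TrainingArguments

# Hedged sketch: per-device batch size 1 x 32 GPUs x 2 accumulation
# steps reproduces the reported global batch size of 64.
args = TrainingArguments(
    output_dir="10k_globalbatchsize64_lr4e5_epochs3",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    seed=42,
)
```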
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Zhang199/TinyLLaVA-Video-Phi2-Naive-16-512 | Zhang199 | "2025-04-14T06:38:35Z" | 8 | 0 | null | [
"safetensors",
"tinyllava",
"video-text-to-text",
"arxiv:2501.15513",
"license:apache-2.0",
"region:us"
] | video-text-to-text | "2025-01-20T09:01:04Z" | |
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4 | MayBashendy | "2024-11-06T16:17:51Z" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-06T15:44:54Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4950
- Qwk: 0.6411
- Mse: 0.4950
- Rmse: 0.7035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0063 | 2 | 10.1861 | 0.0 | 10.1861 | 3.1916 |
| No log | 0.0126 | 4 | 8.5953 | -0.0005 | 8.5953 | 2.9318 |
| No log | 0.0189 | 6 | 6.9159 | 0.0051 | 6.9159 | 2.6298 |
| No log | 0.0252 | 8 | 5.5130 | 0.0037 | 5.5130 | 2.3480 |
| No log | 0.0315 | 10 | 4.3816 | 0.0018 | 4.3816 | 2.0932 |
| No log | 0.0379 | 12 | 3.5082 | 0.0492 | 3.5082 | 1.8730 |
| No log | 0.0442 | 14 | 2.7686 | 0.0128 | 2.7686 | 1.6639 |
| No log | 0.0505 | 16 | 2.1322 | 0.0118 | 2.1322 | 1.4602 |
| No log | 0.0568 | 18 | 1.6261 | 0.0079 | 1.6261 | 1.2752 |
| No log | 0.0631 | 20 | 1.2562 | 0.1722 | 1.2562 | 1.1208 |
| No log | 0.0694 | 22 | 1.0333 | 0.0420 | 1.0333 | 1.0165 |
| No log | 0.0757 | 24 | 0.8915 | 0.0316 | 0.8915 | 0.9442 |
| No log | 0.0820 | 26 | 0.8074 | 0.0316 | 0.8074 | 0.8986 |
| No log | 0.0883 | 28 | 0.7660 | 0.0316 | 0.7660 | 0.8752 |
| No log | 0.0946 | 30 | 0.7689 | 0.0542 | 0.7689 | 0.8769 |
| No log | 0.1009 | 32 | 0.9386 | 0.0937 | 0.9386 | 0.9688 |
| No log | 0.1073 | 34 | 0.8347 | 0.0771 | 0.8347 | 0.9136 |
| No log | 0.1136 | 36 | 0.8293 | 0.4385 | 0.8293 | 0.9106 |
| No log | 0.1199 | 38 | 0.8916 | 0.3628 | 0.8916 | 0.9442 |
| No log | 0.1262 | 40 | 0.8068 | 0.0212 | 0.8068 | 0.8982 |
| No log | 0.1325 | 42 | 0.8411 | 0.0344 | 0.8411 | 0.9171 |
| No log | 0.1388 | 44 | 0.8499 | 0.0344 | 0.8499 | 0.9219 |
| No log | 0.1451 | 46 | 0.8047 | 0.0107 | 0.8047 | 0.8970 |
| No log | 0.1514 | 48 | 0.7906 | 0.0107 | 0.7906 | 0.8892 |
| No log | 0.1577 | 50 | 0.7428 | 0.0317 | 0.7428 | 0.8619 |
| No log | 0.1640 | 52 | 0.7615 | 0.0511 | 0.7615 | 0.8726 |
| No log | 0.1703 | 54 | 0.7432 | 0.0792 | 0.7432 | 0.8621 |
| No log | 0.1767 | 56 | 0.6753 | 0.0610 | 0.6753 | 0.8218 |
| No log | 0.1830 | 58 | 0.6924 | 0.0317 | 0.6924 | 0.8321 |
| No log | 0.1893 | 60 | 0.7336 | 0.0730 | 0.7336 | 0.8565 |
| No log | 0.1956 | 62 | 0.7216 | 0.0213 | 0.7216 | 0.8495 |
| No log | 0.2019 | 64 | 0.6734 | 0.0826 | 0.6734 | 0.8206 |
| No log | 0.2082 | 66 | 0.8115 | 0.1971 | 0.8115 | 0.9008 |
| No log | 0.2145 | 68 | 1.0608 | 0.2342 | 1.0608 | 1.0300 |
| No log | 0.2208 | 70 | 0.8848 | 0.2293 | 0.8848 | 0.9406 |
| No log | 0.2271 | 72 | 0.6445 | 0.1331 | 0.6445 | 0.8028 |
| No log | 0.2334 | 74 | 0.6672 | 0.0803 | 0.6672 | 0.8168 |
| No log | 0.2397 | 76 | 0.6616 | 0.0754 | 0.6616 | 0.8134 |
| No log | 0.2461 | 78 | 0.6149 | 0.1067 | 0.6149 | 0.7842 |
| No log | 0.2524 | 80 | 0.6896 | 0.1973 | 0.6896 | 0.8304 |
| No log | 0.2587 | 82 | 0.7505 | 0.2167 | 0.7505 | 0.8663 |
| No log | 0.2650 | 84 | 0.6389 | 0.1883 | 0.6389 | 0.7993 |
| No log | 0.2713 | 86 | 0.6107 | 0.2957 | 0.6107 | 0.7815 |
| No log | 0.2776 | 88 | 0.6234 | 0.3088 | 0.6234 | 0.7895 |
| No log | 0.2839 | 90 | 0.5901 | 0.2657 | 0.5901 | 0.7681 |
| No log | 0.2902 | 92 | 0.6248 | 0.1786 | 0.6248 | 0.7905 |
| No log | 0.2965 | 94 | 0.6419 | 0.2214 | 0.6419 | 0.8012 |
| No log | 0.3028 | 96 | 0.5860 | 0.2699 | 0.5860 | 0.7655 |
| No log | 0.3091 | 98 | 0.5766 | 0.2956 | 0.5766 | 0.7593 |
| No log | 0.3155 | 100 | 0.5547 | 0.3623 | 0.5547 | 0.7448 |
| No log | 0.3218 | 102 | 0.5514 | 0.4222 | 0.5514 | 0.7426 |
| No log | 0.3281 | 104 | 0.5460 | 0.4061 | 0.5460 | 0.7389 |
| No log | 0.3344 | 106 | 0.5756 | 0.3134 | 0.5756 | 0.7587 |
| No log | 0.3407 | 108 | 0.6144 | 0.3095 | 0.6144 | 0.7838 |
| No log | 0.3470 | 110 | 0.5301 | 0.4421 | 0.5301 | 0.7280 |
| No log | 0.3533 | 112 | 0.5429 | 0.4684 | 0.5429 | 0.7368 |
| No log | 0.3596 | 114 | 0.5177 | 0.4759 | 0.5177 | 0.7195 |
| No log | 0.3659 | 116 | 0.5241 | 0.4151 | 0.5241 | 0.7240 |
| No log | 0.3722 | 118 | 0.5069 | 0.4161 | 0.5069 | 0.7120 |
| No log | 0.3785 | 120 | 0.5293 | 0.4872 | 0.5293 | 0.7275 |
| No log | 0.3849 | 122 | 0.5688 | 0.4517 | 0.5688 | 0.7542 |
| No log | 0.3912 | 124 | 0.5780 | 0.2445 | 0.5780 | 0.7603 |
| No log | 0.3975 | 126 | 0.5334 | 0.4100 | 0.5334 | 0.7304 |
| No log | 0.4038 | 128 | 0.5552 | 0.5686 | 0.5552 | 0.7451 |
| No log | 0.4101 | 130 | 0.5369 | 0.5723 | 0.5369 | 0.7327 |
| No log | 0.4164 | 132 | 0.5145 | 0.3755 | 0.5145 | 0.7173 |
| No log | 0.4227 | 134 | 0.5181 | 0.4368 | 0.5181 | 0.7198 |
| No log | 0.4290 | 136 | 0.5175 | 0.4105 | 0.5175 | 0.7194 |
| No log | 0.4353 | 138 | 0.5481 | 0.5205 | 0.5481 | 0.7403 |
| No log | 0.4416 | 140 | 0.5561 | 0.4941 | 0.5561 | 0.7457 |
| No log | 0.4479 | 142 | 0.5308 | 0.5019 | 0.5308 | 0.7286 |
| No log | 0.4543 | 144 | 0.5421 | 0.4929 | 0.5421 | 0.7363 |
| No log | 0.4606 | 146 | 0.5182 | 0.4383 | 0.5182 | 0.7198 |
| No log | 0.4669 | 148 | 0.5113 | 0.4444 | 0.5113 | 0.7151 |
| No log | 0.4732 | 150 | 0.5292 | 0.3937 | 0.5292 | 0.7275 |
| No log | 0.4795 | 152 | 0.5153 | 0.4278 | 0.5153 | 0.7179 |
| No log | 0.4858 | 154 | 0.4959 | 0.4610 | 0.4959 | 0.7042 |
| No log | 0.4921 | 156 | 0.4822 | 0.4742 | 0.4822 | 0.6944 |
| No log | 0.4984 | 158 | 0.5207 | 0.5700 | 0.5207 | 0.7216 |
| No log | 0.5047 | 160 | 0.6361 | 0.5602 | 0.6361 | 0.7976 |
| No log | 0.5110 | 162 | 0.5405 | 0.5354 | 0.5405 | 0.7352 |
| No log | 0.5174 | 164 | 0.5536 | 0.5347 | 0.5536 | 0.7440 |
| No log | 0.5237 | 166 | 0.5308 | 0.5142 | 0.5308 | 0.7285 |
| No log | 0.5300 | 168 | 0.5827 | 0.5080 | 0.5827 | 0.7634 |
| No log | 0.5363 | 170 | 0.6033 | 0.5139 | 0.6033 | 0.7767 |
| No log | 0.5426 | 172 | 0.7514 | 0.5038 | 0.7514 | 0.8669 |
| No log | 0.5489 | 174 | 0.7327 | 0.5197 | 0.7327 | 0.8560 |
| No log | 0.5552 | 176 | 0.5563 | 0.5225 | 0.5563 | 0.7459 |
| No log | 0.5615 | 178 | 0.5157 | 0.4842 | 0.5157 | 0.7181 |
| No log | 0.5678 | 180 | 0.5430 | 0.5432 | 0.5430 | 0.7369 |
| No log | 0.5741 | 182 | 0.5386 | 0.5786 | 0.5386 | 0.7339 |
| No log | 0.5804 | 184 | 0.4900 | 0.5768 | 0.4900 | 0.7000 |
| No log | 0.5868 | 186 | 0.5030 | 0.5908 | 0.5030 | 0.7092 |
| No log | 0.5931 | 188 | 0.4526 | 0.5804 | 0.4526 | 0.6728 |
| No log | 0.5994 | 190 | 0.5105 | 0.4823 | 0.5105 | 0.7145 |
| No log | 0.6057 | 192 | 0.5870 | 0.4220 | 0.5870 | 0.7662 |
| No log | 0.6120 | 194 | 0.5511 | 0.4319 | 0.5511 | 0.7423 |
| No log | 0.6183 | 196 | 0.4500 | 0.5472 | 0.4500 | 0.6708 |
| No log | 0.6246 | 198 | 0.4526 | 0.5562 | 0.4526 | 0.6728 |
| No log | 0.6309 | 200 | 0.5135 | 0.5754 | 0.5135 | 0.7166 |
| No log | 0.6372 | 202 | 0.6373 | 0.5419 | 0.6373 | 0.7983 |
| No log | 0.6435 | 204 | 0.5640 | 0.5393 | 0.5640 | 0.7510 |
| No log | 0.6498 | 206 | 0.5375 | 0.5351 | 0.5375 | 0.7332 |
| No log | 0.6562 | 208 | 0.5511 | 0.5560 | 0.5511 | 0.7423 |
| No log | 0.6625 | 210 | 0.5414 | 0.5693 | 0.5414 | 0.7358 |
| No log | 0.6688 | 212 | 0.5304 | 0.5811 | 0.5304 | 0.7283 |
| No log | 0.6751 | 214 | 0.4758 | 0.5939 | 0.4758 | 0.6898 |
| No log | 0.6814 | 216 | 0.4437 | 0.5481 | 0.4437 | 0.6661 |
| No log | 0.6877 | 218 | 0.4368 | 0.5673 | 0.4368 | 0.6609 |
| No log | 0.6940 | 220 | 0.4946 | 0.6281 | 0.4946 | 0.7033 |
| No log | 0.7003 | 222 | 0.4564 | 0.5958 | 0.4564 | 0.6756 |
| No log | 0.7066 | 224 | 0.4662 | 0.5795 | 0.4662 | 0.6828 |
| No log | 0.7129 | 226 | 0.5187 | 0.6018 | 0.5187 | 0.7202 |
| No log | 0.7192 | 228 | 0.5179 | 0.6018 | 0.5179 | 0.7196 |
| No log | 0.7256 | 230 | 0.4883 | 0.6011 | 0.4883 | 0.6988 |
| No log | 0.7319 | 232 | 0.4581 | 0.5898 | 0.4581 | 0.6768 |
| No log | 0.7382 | 234 | 0.5164 | 0.6064 | 0.5164 | 0.7186 |
| No log | 0.7445 | 236 | 0.4880 | 0.6120 | 0.4880 | 0.6986 |
| No log | 0.7508 | 238 | 0.4608 | 0.6049 | 0.4608 | 0.6788 |
| No log | 0.7571 | 240 | 0.5627 | 0.6490 | 0.5627 | 0.7502 |
| No log | 0.7634 | 242 | 0.8123 | 0.6725 | 0.8123 | 0.9013 |
| No log | 0.7697 | 244 | 0.6433 | 0.6624 | 0.6433 | 0.8021 |
| No log | 0.7760 | 246 | 0.4387 | 0.5914 | 0.4387 | 0.6624 |
| No log | 0.7823 | 248 | 0.4507 | 0.5951 | 0.4507 | 0.6713 |
| No log | 0.7886 | 250 | 0.6574 | 0.6299 | 0.6574 | 0.8108 |
| No log | 0.7950 | 252 | 0.9073 | 0.5748 | 0.9073 | 0.9525 |
| No log | 0.8013 | 254 | 0.7567 | 0.5976 | 0.7567 | 0.8699 |
| No log | 0.8076 | 256 | 0.4780 | 0.5993 | 0.4780 | 0.6914 |
| No log | 0.8139 | 258 | 0.4653 | 0.4804 | 0.4653 | 0.6821 |
| No log | 0.8202 | 260 | 0.4593 | 0.5099 | 0.4593 | 0.6777 |
| No log | 0.8265 | 262 | 0.5150 | 0.5981 | 0.5150 | 0.7176 |
| No log | 0.8328 | 264 | 0.7188 | 0.5631 | 0.7188 | 0.8478 |
| No log | 0.8391 | 266 | 0.6870 | 0.5665 | 0.6870 | 0.8289 |
| No log | 0.8454 | 268 | 0.5103 | 0.6082 | 0.5103 | 0.7144 |
| No log | 0.8517 | 270 | 0.4610 | 0.4952 | 0.4610 | 0.6790 |
| No log | 0.8580 | 272 | 0.5092 | 0.4066 | 0.5092 | 0.7136 |
| No log | 0.8644 | 274 | 0.4640 | 0.4861 | 0.4640 | 0.6812 |
| No log | 0.8707 | 276 | 0.4945 | 0.5916 | 0.4945 | 0.7032 |
| No log | 0.8770 | 278 | 0.6582 | 0.5572 | 0.6582 | 0.8113 |
| No log | 0.8833 | 280 | 0.6694 | 0.5610 | 0.6694 | 0.8181 |
| No log | 0.8896 | 282 | 0.5728 | 0.5254 | 0.5728 | 0.7568 |
| No log | 0.8959 | 284 | 0.5221 | 0.4152 | 0.5221 | 0.7226 |
| No log | 0.9022 | 286 | 0.4807 | 0.4751 | 0.4807 | 0.6933 |
| No log | 0.9085 | 288 | 0.4549 | 0.5473 | 0.4549 | 0.6745 |
| No log | 0.9148 | 290 | 0.4556 | 0.5597 | 0.4556 | 0.6750 |
| No log | 0.9211 | 292 | 0.4582 | 0.5556 | 0.4582 | 0.6769 |
| No log | 0.9274 | 294 | 0.4645 | 0.5505 | 0.4645 | 0.6816 |
| No log | 0.9338 | 296 | 0.4678 | 0.5381 | 0.4678 | 0.6840 |
| No log | 0.9401 | 298 | 0.4749 | 0.5534 | 0.4749 | 0.6892 |
| No log | 0.9464 | 300 | 0.5625 | 0.5975 | 0.5625 | 0.7500 |
| No log | 0.9527 | 302 | 0.5900 | 0.5826 | 0.5900 | 0.7681 |
| No log | 0.9590 | 304 | 0.4926 | 0.5950 | 0.4926 | 0.7019 |
| No log | 0.9653 | 306 | 0.4816 | 0.4778 | 0.4816 | 0.6940 |
| No log | 0.9716 | 308 | 0.4785 | 0.5246 | 0.4785 | 0.6917 |
| No log | 0.9779 | 310 | 0.4967 | 0.5915 | 0.4967 | 0.7048 |
| No log | 0.9842 | 312 | 0.4777 | 0.5359 | 0.4777 | 0.6912 |
| No log | 0.9905 | 314 | 0.5052 | 0.4469 | 0.5052 | 0.7108 |
| No log | 0.9968 | 316 | 0.4870 | 0.4692 | 0.4870 | 0.6978 |
| No log | 1.0032 | 318 | 0.4959 | 0.6014 | 0.4959 | 0.7042 |
| No log | 1.0095 | 320 | 0.5971 | 0.6622 | 0.5971 | 0.7727 |
| No log | 1.0158 | 322 | 0.6224 | 0.6527 | 0.6224 | 0.7889 |
| No log | 1.0221 | 324 | 0.5090 | 0.6125 | 0.5090 | 0.7134 |
| No log | 1.0284 | 326 | 0.4859 | 0.6161 | 0.4859 | 0.6970 |
| No log | 1.0347 | 328 | 0.5575 | 0.6373 | 0.5575 | 0.7466 |
| No log | 1.0410 | 330 | 0.6631 | 0.6354 | 0.6631 | 0.8143 |
| No log | 1.0473 | 332 | 0.7880 | 0.6128 | 0.7880 | 0.8877 |
| No log | 1.0536 | 334 | 0.6328 | 0.6471 | 0.6328 | 0.7955 |
| No log | 1.0599 | 336 | 0.4833 | 0.5926 | 0.4833 | 0.6952 |
| No log | 1.0662 | 338 | 0.4764 | 0.5915 | 0.4764 | 0.6902 |
| No log | 1.0726 | 340 | 0.4879 | 0.6097 | 0.4879 | 0.6985 |
| No log | 1.0789 | 342 | 0.5004 | 0.6328 | 0.5004 | 0.7074 |
| No log | 1.0852 | 344 | 0.4558 | 0.5696 | 0.4558 | 0.6752 |
| No log | 1.0915 | 346 | 0.4638 | 0.5143 | 0.4638 | 0.6811 |
| No log | 1.0978 | 348 | 0.4590 | 0.5340 | 0.4590 | 0.6775 |
| No log | 1.1041 | 350 | 0.4556 | 0.5999 | 0.4556 | 0.6750 |
| No log | 1.1104 | 352 | 0.4521 | 0.5984 | 0.4521 | 0.6724 |
| No log | 1.1167 | 354 | 0.4603 | 0.5902 | 0.4603 | 0.6784 |
| No log | 1.1230 | 356 | 0.5085 | 0.6098 | 0.5085 | 0.7131 |
| No log | 1.1293 | 358 | 0.5851 | 0.6319 | 0.5851 | 0.7649 |
| No log | 1.1356 | 360 | 0.5377 | 0.6091 | 0.5377 | 0.7333 |
| No log | 1.1420 | 362 | 0.4673 | 0.5626 | 0.4673 | 0.6836 |
| No log | 1.1483 | 364 | 0.4611 | 0.5643 | 0.4611 | 0.6790 |
| No log | 1.1546 | 366 | 0.4560 | 0.5333 | 0.4560 | 0.6753 |
| No log | 1.1609 | 368 | 0.4761 | 0.4842 | 0.4761 | 0.6900 |
| No log | 1.1672 | 370 | 0.4581 | 0.5306 | 0.4581 | 0.6768 |
| No log | 1.1735 | 372 | 0.4492 | 0.5837 | 0.4492 | 0.6702 |
| No log | 1.1798 | 374 | 0.4585 | 0.6097 | 0.4585 | 0.6771 |
| No log | 1.1861 | 376 | 0.4451 | 0.5503 | 0.4451 | 0.6672 |
| No log | 1.1924 | 378 | 0.4524 | 0.5227 | 0.4524 | 0.6726 |
| No log | 1.1987 | 380 | 0.4546 | 0.5008 | 0.4546 | 0.6742 |
| No log | 1.2050 | 382 | 0.4735 | 0.5442 | 0.4735 | 0.6881 |
| No log | 1.2114 | 384 | 0.5067 | 0.5698 | 0.5067 | 0.7118 |
| No log | 1.2177 | 386 | 0.4892 | 0.4913 | 0.4892 | 0.6994 |
| No log | 1.2240 | 388 | 0.4975 | 0.5099 | 0.4975 | 0.7053 |
| No log | 1.2303 | 390 | 0.6492 | 0.6296 | 0.6492 | 0.8057 |
| No log | 1.2366 | 392 | 0.7328 | 0.6114 | 0.7328 | 0.8561 |
| No log | 1.2429 | 394 | 0.5539 | 0.6157 | 0.5539 | 0.7443 |
| No log | 1.2492 | 396 | 0.5265 | 0.4173 | 0.5265 | 0.7256 |
| No log | 1.2555 | 398 | 0.6128 | 0.3532 | 0.6128 | 0.7828 |
| No log | 1.2618 | 400 | 0.5354 | 0.4003 | 0.5354 | 0.7317 |
| No log | 1.2681 | 402 | 0.4935 | 0.5464 | 0.4935 | 0.7025 |
| No log | 1.2744 | 404 | 0.5745 | 0.6324 | 0.5745 | 0.7579 |
| No log | 1.2808 | 406 | 0.5167 | 0.6236 | 0.5167 | 0.7188 |
| No log | 1.2871 | 408 | 0.4620 | 0.5427 | 0.4620 | 0.6797 |
| No log | 1.2934 | 410 | 0.4585 | 0.5055 | 0.4585 | 0.6772 |
| No log | 1.2997 | 412 | 0.4691 | 0.5926 | 0.4691 | 0.6849 |
| No log | 1.3060 | 414 | 0.5962 | 0.6760 | 0.5962 | 0.7722 |
| No log | 1.3123 | 416 | 0.5452 | 0.6593 | 0.5452 | 0.7384 |
| No log | 1.3186 | 418 | 0.4661 | 0.6018 | 0.4661 | 0.6827 |
| No log | 1.3249 | 420 | 0.4503 | 0.5347 | 0.4503 | 0.6710 |
| No log | 1.3312 | 422 | 0.4594 | 0.5752 | 0.4594 | 0.6778 |
| No log | 1.3375 | 424 | 0.5623 | 0.6484 | 0.5623 | 0.7499 |
| No log | 1.3438 | 426 | 0.5562 | 0.6429 | 0.5562 | 0.7458 |
| No log | 1.3502 | 428 | 0.4545 | 0.5922 | 0.4545 | 0.6742 |
| No log | 1.3565 | 430 | 0.4446 | 0.5818 | 0.4446 | 0.6668 |
| No log | 1.3628 | 432 | 0.5001 | 0.6472 | 0.5001 | 0.7072 |
| No log | 1.3691 | 434 | 0.5172 | 0.6548 | 0.5172 | 0.7192 |
| No log | 1.3754 | 436 | 0.4511 | 0.5994 | 0.4511 | 0.6716 |
| No log | 1.3817 | 438 | 0.4721 | 0.5433 | 0.4721 | 0.6871 |
| No log | 1.3880 | 440 | 0.4686 | 0.6124 | 0.4686 | 0.6846 |
| No log | 1.3943 | 442 | 0.5272 | 0.6602 | 0.5272 | 0.7261 |
| No log | 1.4006 | 444 | 0.4777 | 0.6232 | 0.4777 | 0.6912 |
| No log | 1.4069 | 446 | 0.4745 | 0.4864 | 0.4745 | 0.6888 |
| No log | 1.4132 | 448 | 0.4813 | 0.4603 | 0.4813 | 0.6938 |
| No log | 1.4196 | 450 | 0.4566 | 0.5352 | 0.4566 | 0.6757 |
| No log | 1.4259 | 452 | 0.5087 | 0.6295 | 0.5087 | 0.7132 |
| No log | 1.4322 | 454 | 0.5272 | 0.6279 | 0.5272 | 0.7261 |
| No log | 1.4385 | 456 | 0.4695 | 0.5742 | 0.4695 | 0.6852 |
| No log | 1.4448 | 458 | 0.4613 | 0.5300 | 0.4613 | 0.6792 |
| No log | 1.4511 | 460 | 0.4807 | 0.4327 | 0.4807 | 0.6933 |
| No log | 1.4574 | 462 | 0.4712 | 0.4831 | 0.4712 | 0.6865 |
| No log | 1.4637 | 464 | 0.5262 | 0.6207 | 0.5262 | 0.7254 |
| No log | 1.4700 | 466 | 0.5679 | 0.6533 | 0.5679 | 0.7536 |
| No log | 1.4763 | 468 | 0.4943 | 0.6319 | 0.4943 | 0.7030 |
| No log | 1.4826 | 470 | 0.4548 | 0.5373 | 0.4548 | 0.6744 |
| No log | 1.4890 | 472 | 0.4529 | 0.5669 | 0.4529 | 0.6730 |
| No log | 1.4953 | 474 | 0.4979 | 0.6578 | 0.4979 | 0.7056 |
| No log | 1.5016 | 476 | 0.5480 | 0.6783 | 0.5480 | 0.7402 |
| No log | 1.5079 | 478 | 0.4760 | 0.5831 | 0.4760 | 0.6900 |
| No log | 1.5142 | 480 | 0.4790 | 0.4885 | 0.4790 | 0.6921 |
| No log | 1.5205 | 482 | 0.4733 | 0.4948 | 0.4733 | 0.6879 |
| No log | 1.5268 | 484 | 0.4930 | 0.6107 | 0.4930 | 0.7021 |
| No log | 1.5331 | 486 | 0.6387 | 0.6998 | 0.6387 | 0.7992 |
| No log | 1.5394 | 488 | 0.5770 | 0.6947 | 0.5770 | 0.7596 |
| No log | 1.5457 | 490 | 0.4507 | 0.5730 | 0.4507 | 0.6713 |
| No log | 1.5521 | 492 | 0.4761 | 0.4890 | 0.4761 | 0.6900 |
| No log | 1.5584 | 494 | 0.4524 | 0.5010 | 0.4524 | 0.6726 |
| No log | 1.5647 | 496 | 0.4512 | 0.5824 | 0.4512 | 0.6717 |
| No log | 1.5710 | 498 | 0.5386 | 0.6594 | 0.5386 | 0.7339 |
| 0.5 | 1.5773 | 500 | 0.5441 | 0.6588 | 0.5441 | 0.7376 |
| 0.5 | 1.5836 | 502 | 0.5217 | 0.6468 | 0.5217 | 0.7223 |
| 0.5 | 1.5899 | 504 | 0.4504 | 0.5555 | 0.4504 | 0.6711 |
| 0.5 | 1.5962 | 506 | 0.4459 | 0.5713 | 0.4459 | 0.6677 |
| 0.5 | 1.6025 | 508 | 0.4642 | 0.6069 | 0.4642 | 0.6813 |
| 0.5 | 1.6088 | 510 | 0.4950 | 0.6411 | 0.4950 | 0.7035 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Darkhn/UNNAMED-MODEL-A-70b-4.0bpw-h8-exl2 | Darkhn | "2025-04-01T20:48:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:TareksLab/Anathema-V2-LLaMA-70B",
"base_model:merge:TareksLab/Anathema-V2-LLaMA-70B",
"base_model:TareksLab/Erudite-V1-Unleashed-LLaMA-70B",
"base_model:merge:TareksLab/Erudite-V1-Unleashed-LLaMA-70B",
"base_model:TareksLab/RolePlayer-V4-LLaMa-70B",
"base_model:merge:TareksLab/RolePlayer-V4-LLaMa-70B",
"base_model:TareksLab/Scrivener-Base-V4-LLaMA-70B",
"base_model:merge:TareksLab/Scrivener-Base-V4-LLaMA-70B",
"base_model:TareksLab/Wordsmith-V2.0-LLaMa-70B",
"base_model:merge:TareksLab/Wordsmith-V2.0-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2025-04-01T20:22:58Z" | ---
base_model:
- TareksLab/Erudite-V1-Unleashed-LLaMA-70B
- TareksLab/RolePlayer-V4-LLaMa-70B
- TareksLab/Anathema-V2-LLaMA-70B
- TareksLab/Wordsmith-V2.0-LLaMa-70B
- TareksLab/Scrivener-Base-V4-LLaMA-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TareksLab/Erudite-V1-Unleashed-LLaMA-70B](https://huggingface.co/TareksLab/Erudite-V1-Unleashed-LLaMA-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/RolePlayer-V4-LLaMa-70B](https://huggingface.co/TareksLab/RolePlayer-V4-LLaMa-70B)
* [TareksLab/Anathema-V2-LLaMA-70B](https://huggingface.co/TareksLab/Anathema-V2-LLaMA-70B)
* [TareksLab/Wordsmith-V2.0-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V2.0-LLaMa-70B)
* [TareksLab/Scrivener-Base-V4-LLaMA-70B](https://huggingface.co/TareksLab/Scrivener-Base-V4-LLaMA-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/Wordsmith-V2.0-LLaMa-70B
- model: TareksLab/Anathema-V2-LLaMA-70B
- model: TareksLab/Scrivener-Base-V4-LLaMA-70B
- model: TareksLab/RolePlayer-V4-LLaMa-70B
merge_method: sce
base_model: TareksLab/Erudite-V1-Unleashed-LLaMA-70B
parameters:
select_topk: 0.16
dtype: bfloat16
tokenizer:
source: TareksLab/Scrivener-Base-V4-LLaMA-70B
```
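Assuming mergekit's documented Python entry point (an assumption; the card itself only provides the YAML), the config could be applied roughly like this, with paths as placeholders:
```py
# Hedged sketch based on mergekit's README-style Python API;
# file paths and options here are illustrative assumptions.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(merge_config, out_path="./merged-model", options=MergeOptions(copy_tokenizer=True))
```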
|
inin007/llama2_uuu_news_qlora | inin007 | "2024-03-23T08:21:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-03-23T05:31:39Z" | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
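No snippet is provided; under the assumption that this repo holds a PEFT (QLoRA) adapter for the base model named above, a minimal loading sketch would be:
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: attach this adapter to its stated base model.
base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "inin007/llama2_uuu_news_qlora")
```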
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Ayush-Singh/qwen0.5-small-sft | Ayush-Singh | "2025-01-02T23:18:48Z" | 145 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-02T23:17:45Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
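No official snippet is given; assuming a standard causal-LM checkpoint (per the `qwen2`/`text-generation` tags), one plausible starting point is:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch for a plain generation call; the prompt format is an assumption.
repo = "Ayush-Singh/qwen0.5-small-sft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```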
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kartikgupta373/e6-ad15575-705533-black | kartikgupta373 | "2025-01-29T08:33:17Z" | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-29T08:33:04Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# E6 Ad15575 705533 Black
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in fp16 and attach this LoRA adapter
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/e6-ad15575-705533-black', weight_name='lora.safetensors')
# Include the trigger word TOK in your prompt (see "Trigger words" above)
image = pipeline('TOK, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso02/ab47e7aa-52f7-4ed5-a27e-4a0ae3640745 | lesso02 | "2025-01-29T13:20:26Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"region:us"
] | null | "2025-01-29T13:20:05Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab47e7aa-52f7-4ed5-a27e-4a0ae3640745
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 797cc1d62093d99a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/797cc1d62093d99a_train_data.json
type:
field_input: id
field_instruction: layer
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/ab47e7aa-52f7-4ed5-a27e-4a0ae3640745
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/797cc1d62093d99a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2a254a99-9440-4327-afbc-81e1939bd50d
wandb_project: multi
wandb_run: your_name
wandb_runid: 2a254a99-9440-4327-afbc-81e1939bd50d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ab47e7aa-52f7-4ed5-a27e-4a0ae3640745
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8651 | 1.0 | 1 | 0.7503 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/belet-mix-real-v20-sdxl | John6666 | "2025-03-21T06:59:28Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"Japanese",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-03-21T06:52:47Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- Japanese
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1360673?modelVersionId=1560023).
This model was created by [AI_belet](https://civitai.com/user/AI_belet).
|
takesomerisks/qloraLlama213bTrain2 | takesomerisks | "2023-07-27T01:38:56Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-27T01:38:53Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a short code reconstruction follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
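One way to reconstruct this configuration in code (a sketch assuming the standard `transformers` integration, not taken from the actual training script):
```py
import torch
from transformers import BitsAndBytesConfig

# Hedged reconstruction of the bullet-point config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```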
### Framework versions
- PEFT 0.5.0.dev0
|
mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF | mradermacher | "2024-12-16T00:23:17Z" | 593 | 1 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"en",
"dataset:pbevan11/multilingual-constitutional-preference-pairs-revision-only",
"dataset:pbevan11/ultrafeedback_binarized_multilingual",
"base_model:pbevan11/Mistral-Nemo-MCAI-SFT-DPO-revision-only",
"base_model:quantized:pbevan11/Mistral-Nemo-MCAI-SFT-DPO-revision-only",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-10-06T06:53:59Z" | ---
base_model: pbevan11/Mistral-Nemo-MCAI-SFT-DPO-revision-only
datasets:
- pbevan11/multilingual-constitutional-preference-pairs-revision-only
- pbevan11/ultrafeedback_binarized_multilingual
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pbevan11/Mistral-Nemo-MCAI-SFT-DPO-revision-only
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
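As one hedged example (assuming `llama-cpp-python` as the runtime, though any GGUF-compatible loader works), a single-file quant from the table below can be loaded like this:
```py
from llama_cpp import Llama

# Illustrative sketch: the filename matches the i1-Q4_K_M entry below.
llm = Llama(
    model_path="Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Q: What is an imatrix quant? A:", max_tokens=64)
print(out["choices"][0]["text"])
```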
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only-i1-GGUF/resolve/main/Mistral-Nemo-Instruct-MCAI-SFT-DPO-revision-only.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|