Datasets:

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
sivan22/sefaria-ref-finder | sivan22 | "2024-01-11T17:03:34" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-11T16:59:27" | ---
title: Sefaria Ref Finder
emoji: 🐨
colorFrom: gray
colorTo: gray
sdk: streamlit
sdk_version: 1.29.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
mahiatlinux/MasherAI-7B-v0.9-GGUF | mahiatlinux | "2024-03-06T06:59:17" | 3 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:openchat/openchat-3.5-0106",
"base_model:quantized:openchat/openchat-3.5-0106",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-06T06:57:11" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: openchat/openchat-3.5-0106
---
# Uploaded model
- **Developed by:** mahiatlinux
- **License:** apache-2.0
- **Finetuned from model:** openchat/openchat-3.5-0106
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l5_v50 | KingKazma | "2023-08-09T15:56:16" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-09T15:56:15" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
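Since the card gives no usage instructions, here is a minimal loading sketch. It assumes the adapter targets `gpt2` (inferred only from the repository name, not stated in the card):
```python
# Hypothetical loading sketch for this PEFT p-tuning adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model
model = PeftModel.from_pretrained(
    base, "KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l5_v50"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```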
|
Superrrdamn/task-3-Qwen-Qwen2.5-7B-Instruct | Superrrdamn | "2025-02-12T04:59:17" | 194 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | "2025-01-31T22:25:18" | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
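Pending that information, here is a minimal sketch of loading this adapter on its declared base model (the base model name comes from the card metadata; everything else, including dtype and device placement, is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the declared base model, then attach this PEFT adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Superrrdamn/task-3-Qwen-Qwen2.5-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```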
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF | Triangle104 | "2024-10-11T03:55:43" | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"base_model:quantized:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-11T03:54:21" | ---
base_model: nbeerbower/Hermes2-Gutenberg2-Mistral-7B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Hermes2-Gutenberg2-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 37.21
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.66
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.14
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
---
# Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Hermes2-Gutenberg2-Mistral-7B`](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) for more details on the model.
---
Model details:

**Hermes2-Gutenberg2-Mistral-7B**: NousResearch/Hermes-2-Pro-Mistral-7B finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
### Method

ORPO tuned with 2x RTX 3090 for 3 epochs.

### Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 19.35 |
| IFEval (0-Shot)     | 37.21 |
| BBH (3-Shot)        | 28.91 |
| MATH Lvl 5 (4-Shot) |  5.66 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 16.92 |
| MMLU-PRO (5-shot)   | 22.14 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_M-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_m.gguf -c 2048
```
|
nadanainone/popnm | nadanainone | "2022-12-13T05:33:37" | 8 | 6 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"safetensors",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2022-11-14T09:01:35" | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- safetensors
inference: false
---
Model based on the art style of the rhythm game Pop'n Music. Not 100% sure on this one, but it still gives decent results with the right settings depending on the prompt; I couldn't tell you exactly which settings, because I haven't gotten it entirely down.
The trigger prompt is `popnm`.
I claim no ownership over this; all rights belong to their respective owners.
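A minimal usage sketch with 🧨 diffusers, assuming the repo loads as a standard `StableDiffusionPipeline` (the prompt and settings below are illustrative, not the author's recommended ones):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nadanainone/popnm", torch_dtype=torch.float16
).to("cuda")
# "popnm" is the trigger token described above.
image = pipe("popnm, smiling character, colorful background").images[0]
image.save("popnm_sample.png")
```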





 |
redstonehero/yiffymix_32 | redstonehero | "2023-08-09T08:51:06" | 21 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-09T08:16:53" | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
greattkiffy/gemma-2-2B-it-thinking-function_calling-V0 | greattkiffy | "2025-02-20T04:30:33" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | "2025-02-20T04:28:37" | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="greattkiffy/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morethiru/ring1 | morethiru | "2025-02-22T14:48:16" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-22T14:22:29" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ring1
---
# Ring1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ring1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('morethiru/ring1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lmstudio-community/Llama-3.1-Tulu-3-405B-GGUF | lmstudio-community | "2025-01-30T22:14:16" | 769 | 2 | null | [
"gguf",
"text-generation",
"en",
"dataset:allenai/RLVR-MATH",
"arxiv:2411.15124",
"base_model:allenai/Llama-3.1-Tulu-3-405B",
"base_model:quantized:allenai/Llama-3.1-Tulu-3-405B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-30T15:56:08" | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: allenai/Llama-3.1-Tulu-3-405B
datasets:
- allenai/RLVR-MATH
language:
- en
license: llama3.1
---
## 💫 Community Model> Llama 3.1 Tulu 3 405B by Allenai
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [allenai](https://huggingface.co/allenai)<br>
**Original model**: [Llama-3.1-Tulu-3-405B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-405B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b4585](https://github.com/ggerganov/llama.cpp/releases/tag/b4585)<br>
## Technical Details
- Supports a context length of 128k tokens.
- Fully open-source data, code, and recipes.
- Designed for state-of-the-art performance across many tasks.
More details from their original paper available [here](https://arxiv.org/abs/2411.15124).
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
croissantllm/base_140k | croissantllm | "2024-02-01T15:56:50" | 35 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-18T14:30:08" |
---
license: mit
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (140k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 140k steps (2.2T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This is a base model; it is not finetuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_140k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
|
LLMJapan/nvidia_AceInstruct-72B-exl2-3.0bpw | LLMJapan | "2025-02-14T12:18:12" | 0 | 0 | null | [
"safetensors",
"qwen2",
"nvidia",
"AceInstruct",
"code",
"math",
"general_domain",
"instruct_model",
"text-generation",
"conversational",
"en",
"base_model:nvidia/AceInstruct-72B",
"base_model:quantized:nvidia/AceInstruct-72B",
"license:cc-by-nc-4.0",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2025-02-14T11:42:54" | ---
quantized_by: LLMJapan
pipeline_tag: text-generation
license: cc-by-nc-4.0
language:
- en
tags:
- nvidia
- AceInstruct
- code
- math
- general_domain
- instruct_model
base_model: nvidia/AceInstruct-72B
---
## Exllama v2 Quantizations of AceInstruct-72B by nvidia
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.2.8">turboderp's ExLlamaV2 v0.2.8</a> for quantization.
Original model: https://huggingface.co/nvidia/AceInstruct-72B
Example quantization command for creating other bpw quantizations:
```
cd {your git clone directory}
python convert.py -i {path to}/AceInstruct-72B -o {path to}/AceInstruct-72B/workingdir -cf {path to}/AceInstruct-72B/AceInstruct-72B-3bpw -b 3.0
```
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## How to add your system prompt
Copy the following JSON and replace the "You are AceInstruct developed by NVIDIA. You are helpful assistant." sentence with your own system prompt.
The default tokenizer_config.json does not include a system prompt.
tokenizer_config.json
```
"chat_template": "{{- '<|im_start|>system\\nYou are AceInstruct developed by NVIDIA. You are helpful assistant.<|im_end|>\\n' }}\n {%- for message in messages %}\n{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n{{- '<|im_start|>assistant\n' }}\n{%- endif %}\n",
```
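To check that your edited template renders as intended, here is a minimal sketch (the local path is a placeholder; `apply_chat_template` reads the `chat_template` field shown above):
```python
from transformers import AutoTokenizer

# Placeholder path: point this at the directory holding the quantized
# weights and your edited tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained("path/to/AceInstruct-72B-3bpw")

messages = [{"role": "user", "content": "Solve 12 * 7."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # prints the <|im_start|> blocks, including your system prompt
```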
## File information
| quantization type | file size |
| ----------------------- | ----------: |
| 3.0bpw | 27.8 GiB |
## Benchmark Results
| | Qwen2.5-1.5B-Instruct | AceInstruct-1.5B | Qwen2.5-7B-Instruct | AceInstruct-7B | Qwen2.5-72B-Instruct | AceInstruct-72B |
| --------- |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| HumanEval | 61.60 | 73.17 | 84.80 | 85.37 | 86.60 | 89.63 |
| MBPP | 63.20 | 65.76 | 79.20 | 74.32 | 88.20 | 83.66 |
| GSM8K | 73.20 | 80.44 | 91.60 | 93.10 | 95.80 | 96.36 |
| MATH | 55.20 | 60.34 | 75.50 | 76.40 | 83.10 | 84.50 |
| MMLU | 58.37 | 58.17 | 74.51 | 74.68 | 84.67 | 83.88 |
| MMLU Pro | 32.40 | 33.78 | 56.30 | 54.50 | 71.10 | 66.10 |
| Average | 57.33 | 61.94 | 76.99 | 76.40 | 84.91 | 84.02 |
## Credits
Thanks to the NVIDIA team.
|
saberzl/SIDA-13B | saberzl | "2025-03-14T10:22:58" | 1 | 1 | null | [
"pytorch",
"llava",
"image-segmentation",
"en",
"dataset:saberzl/SID_Set",
"arxiv:2412.04292",
"base_model:xinlai/LISA-13B-llama2-v1",
"base_model:finetune:xinlai/LISA-13B-llama2-v1",
"license:llama2",
"region:us"
] | image-segmentation | "2025-03-13T18:47:26" | ---
license: llama2
datasets:
- saberzl/SID_Set
language:
- en
metrics:
- accuracy
base_model:
- xinlai/LISA-13B-llama2-v1
pipeline_tag: image-segmentation
---
# SIDA Model Card
## Model details
**Model type:**
SIDA is a model fine-tuned from LISA, designed to detect and localize tampered regions in images.
**Model date:**
SIDA-13B was trained in February 2025.
**Paper or resources for more information:**
Paper: https://arxiv.org/pdf/2412.04292
Resource: https://github.com/hzlsaber/SIDA
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Training Data
SIDA was trained on SID_Set, which consists of real images, tampered images, and fully synthetic images. More information is available [here](https://huggingface.co/datasets/saberzl/SID_Set).
## Citation Information
If you find this dataset useful, please consider citing our paper:
```
@misc{huang2025sidasocialmediaimage,
title={SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model},
author={Zhenglin Huang and Jinwei Hu and Xiangtai Li and Yiwei He and Xingyu Zhao and Bei Peng and Baoyuan Wu and Xiaowei Huang and Guangliang Cheng},
year={2025},
booktitle={Conference on Computer Vision and Pattern Recognition}
}
``` |
KwaiVGI/LivePortrait | KwaiVGI | "2025-03-03T16:17:36" | 4,150 | 355 | liveportrait | [
"liveportrait",
"onnx",
"image-to-video",
"arxiv:2407.03168",
"license:mit",
"region:us"
] | image-to-video | "2024-07-08T15:39:36" | ---
license: mit
library_name: liveportrait
pipeline_tag: image-to-video
---
<h1 align="center">LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control</h1>
<div align='center'>
<a href='https://github.com/cleardusk' target='_blank'><strong>Jianzhu Guo</strong></a><sup> 1*†</sup> 
<a href='https://github.com/Mystery099' target='_blank'><strong>Dingyun Zhang</strong></a><sup> 1,2*</sup> 
<a href='https://github.com/KwaiVGI' target='_blank'><strong>Xiaoqiang Liu</strong></a><sup> 1</sup> 
<a href='https://github.com/zzzweakman' target='_blank'><strong>Zhizhou Zhong</strong></a><sup> 1,3</sup> 
<a href='https://scholar.google.com.hk/citations?user=_8k1ubAAAAAJ' target='_blank'><strong>Yuan Zhang</strong></a><sup> 1</sup> 
</div>
<div align='center'>
<a href='https://scholar.google.com/citations?user=P6MraaYAAAAJ' target='_blank'><strong>Pengfei Wan</strong></a><sup> 1</sup> 
<a href='https://openreview.net/profile?id=~Di_ZHANG3' target='_blank'><strong>Di Zhang</strong></a><sup> 1</sup> 
</div>
<div align='center'>
<sup>1 </sup>Kuaishou Technology  <sup>2 </sup>University of Science and Technology of China  <sup>3 </sup>Fudan University 
</div>
<div align='center'>
<small><sup>*</sup> Equal contributions</small>
<small><sup>†</sup> Corresponding author</small>
</div>
<div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
<!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
<a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
<a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
<a href='https://huggingface.co/spaces/KwaiVGI/liveportrait'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
<a href="https://github.com/KwaiVGI/LivePortrait"><img src="https://img.shields.io/github/stars/KwaiVGI/LivePortrait"></a>
</div>
<br>
<p align="center">
<img src="./docs/showcase2.gif" alt="showcase">
🔥 For more results, visit our <a href="https://liveportrait.github.io/"><strong>homepage</strong></a> 🔥
</p>
## 🔥 Updates
- **`2024/08/02`**: 😸 We released a version of the **Animals model**, along with several other updates and improvements. Check out the details [**here**](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-08-02.md)!
- **`2024/07/25`**: 📦 Windows users can now download the package from [HuggingFace](https://huggingface.co/cleardusk/LivePortrait-Windows/tree/main) or [BaiduYun](https://pan.baidu.com/s/1FWsWqKe0eNfXrwjEhhCqlw?pwd=86q2). Simply unzip and double-click `run_windows.bat` to enjoy!
- **`2024/07/24`**: 🎨 We support pose editing for source portraits in the Gradio interface. We’ve also lowered the default detection threshold to increase recall. [Have fun](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-07-24.md)!
- **`2024/07/19`**: ✨ We support 🎞️ portrait video editing (aka v2v)! More to see [here](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-07-19.md).
- **`2024/07/17`**: 🍎 We support macOS with Apple Silicon, modified from [jeethu](https://github.com/jeethu)'s PR [#143](https://github.com/KwaiVGI/LivePortrait/pull/143).
- **`2024/07/10`**: 💪 We support audio and video concatenating, driving video auto-cropping, and template making to protect privacy. More to see [here](https://github.com/KwaiVGI/LivePortrait/blob/main/assets/docs/changelog/2024-07-10.md).
- **`2024/07/09`**: 🤗 We released the [HuggingFace Space](https://huggingface.co/spaces/KwaiVGI/liveportrait), thanks to the HF team and [Gradio](https://github.com/gradio-app/gradio)!
- **`2024/07/04`**: 😊 We released the initial version of the inference code and models. Continuous updates, stay tuned!
- **`2024/07/04`**: 🔥 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
## Introduction 📖
This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
We are actively updating and improving this repository. If you find any bugs or have suggestions, welcome to raise issues or submit pull requests (PR) 💖.
## Getting Started 🏁
### 1. Clone the code and prepare the environment
```bash
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait
# create env using conda
conda create -n LivePortrait python==3.9
conda activate LivePortrait
# install dependencies with pip
# for Linux and Windows users
pip install -r requirements.txt
# for macOS with Apple Silicon users
pip install -r requirements_macOS.txt
```
**Note:** make sure your system has [FFmpeg](https://ffmpeg.org/download.html) installed, including both `ffmpeg` and `ffprobe`!
### 2. Download pretrained weights
The easiest way to download the pretrained weights is from HuggingFace:
```bash
# first, ensure git-lfs is installed, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
git lfs install
# clone and move the weights
git clone https://huggingface.co/KwaiVGI/LivePortrait temp_pretrained_weights
mv temp_pretrained_weights/* pretrained_weights/
rm -rf temp_pretrained_weights
```
Alternatively, you can download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). Unzip and place them in `./pretrained_weights`.
Ensure the directory structure is as follows, or at least contains:
```text
pretrained_weights
├── insightface
│ └── models
│ └── buffalo_l
│ ├── 2d106det.onnx
│ └── det_10g.onnx
└── liveportrait
├── base_models
│ ├── appearance_feature_extractor.pth
│ ├── motion_extractor.pth
│ ├── spade_generator.pth
│ └── warping_module.pth
├── landmark.onnx
└── retargeting_models
└── stitching_retargeting_module.pth
```
### 3. Inference 🚀
#### Fast hands-on
```bash
# For Linux and Windows
python inference.py
# For macOS with Apple Silicon (Intel not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py
```
If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image or video, and generated result.
<p align="center">
<img src="./docs/inference.gif" alt="image">
</p>
Or, you can change the input by specifying the `-s` and `-d` arguments:
```bash
# source input is an image
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
# source input is a video ✨
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4
# more options to see
python inference.py -h
```
#### Driving video auto-cropping 📢📢📢
To use your own driving video, we **recommend**: ⬇️
- Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping by `--flag_crop_driving_video`.
- Focus on the head area, similar to the example videos.
- Minimize shoulder movement.
- Make sure the first frame of the driving video is a frontal face with a **neutral expression**.
Below is an auto-cropping example using `--flag_crop_driving_video`:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
```
If the auto-cropping results are not satisfactory, you can modify the `--scale_crop_driving_video` and `--vy_ratio_crop_driving_video` options to adjust the scale and offset, or crop manually.
#### Motion template making
You can also use the auto-generated motion template files ending with `.pkl` to speed up inference and **protect privacy**, for example:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl # portrait animation
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d5.pkl # portrait video editing
```
### 4. Gradio interface 🤗
We also provide a Gradio <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a> interface for a better experience, just run by:
```bash
# For Linux and Windows users (and macOS with Intel??)
python app.py
# For macOS with Apple Silicon users (Intel not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py
```
You can specify the `--server_port`, `--share`, `--server_name` arguments to satisfy your needs!
🚀 We also provide an acceleration option `--flag_do_torch_compile`. The first-time inference triggers an optimization process (about one minute), making subsequent inferences 20-30% faster. Performance gains may vary with different CUDA versions.
```bash
# enable torch.compile for faster inference
python app.py --flag_do_torch_compile
```
**Note**: This method is not supported on Windows and macOS.
**Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
### 5. Inference speed evaluation 🚀🚀🚀
We have also provided a script to evaluate the inference speed of each module:
```bash
# For NVIDIA GPU
python speed.py
```
Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:
| Model | Parameters(M) | Model Size(MB) | Inference(ms) |
|-----------------------------------|:-------------:|:--------------:|:-------------:|
| Appearance Feature Extractor | 0.84 | 3.3 | 0.82 |
| Motion Extractor | 28.12 | 108 | 0.84 |
| Spade Generator | 55.37 | 212 | 7.59 |
| Warping Module | 45.53 | 174 | 5.21 |
| Stitching and Retargeting Modules | 0.23 | 2.3 | 0.31 |
*Note: The values for the Stitching and Retargeting Modules represent the combined parameter counts and total inference time of three sequential MLP networks.*
## Community Resources 🤗
Discover the invaluable resources contributed by our community to enhance your LivePortrait experience:
- [ComfyUI-LivePortraitKJ](https://github.com/kijai/ComfyUI-LivePortraitKJ) by [@kijai](https://github.com/kijai)
- [comfyui-liveportrait](https://github.com/shadowcz007/comfyui-liveportrait) by [@shadowcz007](https://github.com/shadowcz007)
- [LivePortrait In ComfyUI](https://www.youtube.com/watch?v=aFcS31OWMjE) by [@Benji](https://www.youtube.com/@TheFutureThinker)
- [LivePortrait hands-on tutorial](https://www.youtube.com/watch?v=uyjSTAOY7yI) by [@AI Search](https://www.youtube.com/@theAIsearch)
- [ComfyUI tutorial](https://www.youtube.com/watch?v=8-IcDDmiUMM) by [@Sebastian Kamph](https://www.youtube.com/@sebastiankamph)
- [Replicate Playground](https://replicate.com/fofr/live-portrait) and [cog-comfyui](https://github.com/fofr/cog-comfyui) by [@fofr](https://github.com/fofr)
And many more amazing contributions from our community!
## Acknowledgements 💐
We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories, for their open research and contributions.
## Citation 💖
If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
```bibtex
@article{guo2024liveportrait,
title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
author = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
journal = {arXiv preprint arXiv:2407.03168},
year = {2024}
}
```
*Long live in arXiv.*
## Contact 📧
[**Jianzhu Guo (郭建珠)**](https://guojianzhu.com); **[email protected]**
|
zpdlsprtm/my_awesome_billsum_model | zpdlsprtm | "2024-05-21T05:32:52" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-21T05:27:47" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5093
- Rouge1: 0.1421
- Rouge2: 0.049
- Rougel: 0.1164
- Rougelsum: 0.1163
- Gen Len: 19.0
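A minimal inference sketch (the input text and the `summarize:` prefix follow the usual T5 summarization convention and are assumptions, not from the card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="zpdlsprtm/my_awesome_billsum_model")
text = (
    "summarize: The bill establishes a grant program for rural broadband "
    "deployment and directs the agency to report annually on coverage."
)
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```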
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8023 | 0.124 | 0.0327 | 0.1044 | 0.1044 | 19.0 |
| No log | 2.0 | 124 | 2.5922 | 0.1325 | 0.0397 | 0.1085 | 0.1088 | 19.0 |
| No log | 3.0 | 186 | 2.5274 | 0.1398 | 0.0473 | 0.1152 | 0.1153 | 19.0 |
| No log | 4.0 | 248 | 2.5093 | 0.1421 | 0.049 | 0.1164 | 0.1163 | 19.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RichardErkhov/tistak_-_audhL4lz1GGwkJ6F-8bits | RichardErkhov | "2025-03-09T10:14:04" | 0 | 0 | null | [
"safetensors",
"phi3",
"custom_code",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-09T10:10:56" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
audhL4lz1GGwkJ6F - bnb 8bits
- Model creator: https://huggingface.co/tistak/
- Original model: https://huggingface.co/tistak/audhL4lz1GGwkJ6F/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
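In the absence of author-provided instructions, here is a minimal loading sketch (it assumes `bitsandbytes` is installed; `trust_remote_code=True` matches the repo's `custom_code` tag):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/tistak_-_audhL4lz1GGwkJ6F-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# The 8-bit bitsandbytes quantization config is stored with the checkpoint,
# so a plain from_pretrained call loads the model in 8-bit.
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", trust_remote_code=True
)
```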
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/FastLlama-3.2-1B-Instruct-GGUF | QuantFactory | "2024-12-12T11:52:32" | 155 | 1 | transformers | [
"transformers",
"gguf",
"math",
"lora",
"science",
"chemistry",
"biology",
"code",
"text-generation-inference",
"unsloth",
"llama",
"en",
"de",
"es",
"fr",
"it",
"pt",
"hi",
"th",
"dataset:HuggingFaceTB/smoltalk",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-12T11:44:26" |
---
library_name: transformers
tags:
- math
- lora
- science
- chemistry
- biology
- code
- text-generation-inference
- unsloth
- llama
license: apache-2.0
datasets:
- HuggingFaceTB/smoltalk
language:
- en
- de
- es
- fr
- it
- pt
- hi
- th
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
[](https://hf.co/QuantFactory)
# QuantFactory/FastLlama-3.2-1B-Instruct-GGUF
This is quantized version of [suayptalha/FastLlama-3.2-1B-Instruct](https://huggingface.co/suayptalha/FastLlama-3.2-1B-Instruct) created using llama.cpp
# Original Model Card

You can use the ChatML & Alpaca formats.
You can chat with the model via this [space](https://huggingface.co/spaces/suayptalha/Chat-with-FastLlama).
**Overview:**
FastLlama is a highly optimized version of the Llama-3.2-1B-Instruct model. Designed for superior performance in constrained environments, it combines speed, compactness, and high accuracy. This version has been fine-tuned using the MetaMathQA-50k section of the HuggingFaceTB/smoltalk dataset to enhance its mathematical reasoning and problem-solving abilities.
**Features:**

- Lightweight and Fast: Optimized to deliver Llama-class capabilities with reduced computational overhead.
- Fine-Tuned for Math Reasoning: Utilizes MetaMathQA-50k for better handling of complex mathematical problems and logical reasoning tasks.
- Instruction-Tuned: Pre-trained on instruction-following tasks, making it robust in understanding and executing detailed queries.
- Versatile Use Cases: Suitable for educational tools, tutoring systems, or any application requiring mathematical reasoning.
**Performance Highlights:**

- Smaller Footprint: The model delivers comparable results to larger counterparts while operating efficiently on smaller hardware.
- Enhanced Accuracy: Demonstrates improved performance on mathematical QA benchmarks.
- Instruction Adherence: Retains high fidelity in understanding and following user instructions, even for complex queries.
**Loading the Model:**
```py
import torch
from transformers import pipeline
model_id = "suayptalha/FastLlama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a friendly assistant named FastLlama."},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
**Dataset:** MetaMathQA-50k

The MetaMathQA-50k subset of HuggingFaceTB/smoltalk was selected for fine-tuning due to its focus on mathematical reasoning, multi-step problem-solving, and logical inference. The dataset includes:

- Algebraic problems
- Geometric reasoning tasks
- Statistical and probabilistic questions
- Logical deduction problems
**Model Fine-Tuning:**

Fine-tuning was conducted using the following configuration:

- Learning Rate: 2e-4
- Epochs: 1
- Optimizer: AdamW
- Framework: Unsloth
**License:**
This model is licensed under the Apache 2.0 License. See the LICENSE file for details.
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/suayptalha)
|
familiesportrait/portraitzeichnenlassen | familiesportrait | "2022-05-20T10:35:24" | 0 | 0 | null | [
"region:us"
] | null | "2022-05-20T10:34:07" | Und wenn Sie es jemals satt haben, Ihr eigenes Bild zu zeichnen, können Sie sich jederzeit mit einem Freund treffen und üben, Porträts voneinander zu zeichnen.
[https://familiesportrait.de/products/portrait-zeichnen-lassen](https://familiesportrait.de/products/portrait-zeichnen-lassen)
|
Samaneh/xlm-roberta-base-finetuned-panx-de | Samaneh | "2022-11-16T02:18:35" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-11-16T01:53:54" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
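A minimal inference sketch (the example sentence is illustrative; the card itself provides no usage code):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Samaneh/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```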
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization | MayBashendy | "2025-01-18T12:58:46" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-18T01:51:51" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6630
- Qwk: 0.3582
- Mse: 1.6630
- Rmse: 1.2896
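A minimal inference sketch, offered as an assumption since the card provides no usage code; given the Qwk/RMSE metrics, the classification head likely emits an organization score rather than named labels:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization",
)
# Pass the Arabic essay text to be scored for organization.
print(clf("ضع هنا نص المقال العربي المراد تقييم تنظيمه"))
```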
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0278 | 2 | 6.8743 | 0.0116 | 6.8743 | 2.6219 |
| No log | 0.0556 | 4 | 4.6163 | 0.0917 | 4.6163 | 2.1486 |
| No log | 0.0833 | 6 | 3.5713 | -0.0212 | 3.5713 | 1.8898 |
| No log | 0.1111 | 8 | 2.4010 | 0.2254 | 2.4010 | 1.5495 |
| No log | 0.1389 | 10 | 1.9682 | 0.2951 | 1.9682 | 1.4029 |
| No log | 0.1667 | 12 | 2.0748 | 0.1575 | 2.0748 | 1.4404 |
| No log | 0.1944 | 14 | 2.2708 | 0.1119 | 2.2708 | 1.5069 |
| No log | 0.2222 | 16 | 2.3306 | 0.0972 | 2.3306 | 1.5266 |
| No log | 0.25 | 18 | 2.2378 | 0.1127 | 2.2378 | 1.4959 |
| No log | 0.2778 | 20 | 1.8753 | 0.2521 | 1.8753 | 1.3694 |
| No log | 0.3056 | 22 | 1.6925 | 0.2241 | 1.6925 | 1.3009 |
| No log | 0.3333 | 24 | 1.8527 | 0.3810 | 1.8527 | 1.3611 |
| No log | 0.3611 | 26 | 1.9788 | 0.3088 | 1.9788 | 1.4067 |
| No log | 0.3889 | 28 | 1.5871 | 0.368 | 1.5871 | 1.2598 |
| No log | 0.4167 | 30 | 1.4117 | 0.2569 | 1.4117 | 1.1882 |
| No log | 0.4444 | 32 | 1.7661 | 0.2018 | 1.7661 | 1.3290 |
| No log | 0.4722 | 34 | 1.5367 | 0.1698 | 1.5367 | 1.2396 |
| No log | 0.5 | 36 | 1.3519 | 0.3333 | 1.3519 | 1.1627 |
| No log | 0.5278 | 38 | 1.3514 | 0.4390 | 1.3514 | 1.1625 |
| No log | 0.5556 | 40 | 1.3491 | 0.3810 | 1.3491 | 1.1615 |
| No log | 0.5833 | 42 | 1.2180 | 0.48 | 1.2180 | 1.1036 |
| No log | 0.6111 | 44 | 1.1629 | 0.4677 | 1.1629 | 1.0784 |
| No log | 0.6389 | 46 | 1.1778 | 0.4034 | 1.1778 | 1.0853 |
| No log | 0.6667 | 48 | 1.1365 | 0.4603 | 1.1365 | 1.0661 |
| No log | 0.6944 | 50 | 1.0843 | 0.4724 | 1.0843 | 1.0413 |
| No log | 0.7222 | 52 | 1.0515 | 0.4516 | 1.0515 | 1.0254 |
| No log | 0.75 | 54 | 1.1179 | 0.4959 | 1.1179 | 1.0573 |
| No log | 0.7778 | 56 | 1.0262 | 0.5802 | 1.0262 | 1.0130 |
| No log | 0.8056 | 58 | 1.0705 | 0.5455 | 1.0705 | 1.0346 |
| No log | 0.8333 | 60 | 1.0849 | 0.5693 | 1.0849 | 1.0416 |
| No log | 0.8611 | 62 | 1.0861 | 0.6015 | 1.0861 | 1.0421 |
| No log | 0.8889 | 64 | 1.5668 | 0.3651 | 1.5668 | 1.2517 |
| No log | 0.9167 | 66 | 1.3472 | 0.4769 | 1.3472 | 1.1607 |
| No log | 0.9444 | 68 | 0.8965 | 0.6418 | 0.8965 | 0.9468 |
| No log | 0.9722 | 70 | 1.0427 | 0.5692 | 1.0427 | 1.0211 |
| No log | 1.0 | 72 | 1.1167 | 0.5038 | 1.1167 | 1.0568 |
| No log | 1.0278 | 74 | 1.0754 | 0.5263 | 1.0754 | 1.0370 |
| No log | 1.0556 | 76 | 0.8481 | 0.6618 | 0.8481 | 0.9209 |
| No log | 1.0833 | 78 | 0.8877 | 0.6667 | 0.8877 | 0.9422 |
| No log | 1.1111 | 80 | 0.8609 | 0.6377 | 0.8609 | 0.9278 |
| No log | 1.1389 | 82 | 0.7850 | 0.6667 | 0.7850 | 0.8860 |
| No log | 1.1667 | 84 | 0.8171 | 0.6763 | 0.8171 | 0.9040 |
| No log | 1.1944 | 86 | 0.8704 | 0.6957 | 0.8704 | 0.9330 |
| No log | 1.2222 | 88 | 0.8779 | 0.7101 | 0.8779 | 0.9370 |
| No log | 1.25 | 90 | 1.0684 | 0.6370 | 1.0684 | 1.0336 |
| No log | 1.2778 | 92 | 1.1970 | 0.5538 | 1.1970 | 1.0941 |
| No log | 1.3056 | 94 | 1.3931 | 0.3937 | 1.3931 | 1.1803 |
| No log | 1.3333 | 96 | 1.6619 | 0.2927 | 1.6619 | 1.2892 |
| No log | 1.3611 | 98 | 1.5432 | 0.3333 | 1.5432 | 1.2422 |
| No log | 1.3889 | 100 | 1.3301 | 0.4228 | 1.3301 | 1.1533 |
| No log | 1.4167 | 102 | 1.2872 | 0.4762 | 1.2872 | 1.1345 |
| No log | 1.4444 | 104 | 1.1537 | 0.512 | 1.1537 | 1.0741 |
| No log | 1.4722 | 106 | 1.1444 | 0.4688 | 1.1444 | 1.0698 |
| No log | 1.5 | 108 | 1.0963 | 0.5827 | 1.0963 | 1.0470 |
| No log | 1.5278 | 110 | 1.1601 | 0.5231 | 1.1601 | 1.0771 |
| No log | 1.5556 | 112 | 1.1994 | 0.5354 | 1.1994 | 1.0952 |
| No log | 1.5833 | 114 | 1.1301 | 0.4615 | 1.1301 | 1.0630 |
| No log | 1.6111 | 116 | 1.1912 | 0.4274 | 1.1912 | 1.0914 |
| No log | 1.6389 | 118 | 1.1762 | 0.3684 | 1.1762 | 1.0845 |
| No log | 1.6667 | 120 | 1.1985 | 0.4576 | 1.1985 | 1.0948 |
| No log | 1.6944 | 122 | 1.2099 | 0.4878 | 1.2099 | 1.0999 |
| No log | 1.7222 | 124 | 1.0720 | 0.5041 | 1.0720 | 1.0354 |
| No log | 1.75 | 126 | 0.9309 | 0.5873 | 0.9309 | 0.9648 |
| No log | 1.7778 | 128 | 0.9126 | 0.6142 | 0.9126 | 0.9553 |
| No log | 1.8056 | 130 | 0.9821 | 0.5865 | 0.9821 | 0.9910 |
| No log | 1.8333 | 132 | 1.2015 | 0.5231 | 1.2015 | 1.0961 |
| No log | 1.8611 | 134 | 1.5060 | 0.4 | 1.5060 | 1.2272 |
| No log | 1.8889 | 136 | 1.4208 | 0.4444 | 1.4208 | 1.1920 |
| No log | 1.9167 | 138 | 1.0659 | 0.5891 | 1.0659 | 1.0324 |
| No log | 1.9444 | 140 | 1.0502 | 0.5484 | 1.0502 | 1.0248 |
| No log | 1.9722 | 142 | 1.0895 | 0.5124 | 1.0895 | 1.0438 |
| No log | 2.0 | 144 | 1.2259 | 0.544 | 1.2259 | 1.1072 |
| No log | 2.0278 | 146 | 1.4841 | 0.4559 | 1.4841 | 1.2182 |
| No log | 2.0556 | 148 | 1.4208 | 0.4662 | 1.4208 | 1.1920 |
| No log | 2.0833 | 150 | 1.2154 | 0.5469 | 1.2154 | 1.1024 |
| No log | 2.1111 | 152 | 1.0988 | 0.5736 | 1.0988 | 1.0482 |
| No log | 2.1389 | 154 | 1.1331 | 0.5469 | 1.1331 | 1.0645 |
| No log | 2.1667 | 156 | 1.2490 | 0.5271 | 1.2490 | 1.1176 |
| No log | 2.1944 | 158 | 1.6139 | 0.3333 | 1.6139 | 1.2704 |
| No log | 2.2222 | 160 | 1.7910 | 0.2429 | 1.7910 | 1.3383 |
| No log | 2.25 | 162 | 1.5982 | 0.3333 | 1.5982 | 1.2642 |
| No log | 2.2778 | 164 | 1.4828 | 0.3768 | 1.4828 | 1.2177 |
| No log | 2.3056 | 166 | 1.3001 | 0.5224 | 1.3001 | 1.1402 |
| No log | 2.3333 | 168 | 1.1274 | 0.5692 | 1.1274 | 1.0618 |
| No log | 2.3611 | 170 | 0.9726 | 0.6094 | 0.9726 | 0.9862 |
| No log | 2.3889 | 172 | 0.9710 | 0.6519 | 0.9710 | 0.9854 |
| No log | 2.4167 | 174 | 1.0677 | 0.6165 | 1.0677 | 1.0333 |
| No log | 2.4444 | 176 | 1.4016 | 0.4088 | 1.4016 | 1.1839 |
| No log | 2.4722 | 178 | 1.4981 | 0.3650 | 1.4981 | 1.2240 |
| No log | 2.5 | 180 | 1.2418 | 0.5271 | 1.2418 | 1.1144 |
| No log | 2.5278 | 182 | 0.9713 | 0.5984 | 0.9713 | 0.9856 |
| No log | 2.5556 | 184 | 0.9492 | 0.64 | 0.9492 | 0.9743 |
| No log | 2.5833 | 186 | 1.0347 | 0.6142 | 1.0347 | 1.0172 |
| No log | 2.6111 | 188 | 1.3287 | 0.5191 | 1.3287 | 1.1527 |
| No log | 2.6389 | 190 | 1.4793 | 0.4296 | 1.4793 | 1.2163 |
| No log | 2.6667 | 192 | 1.3827 | 0.5113 | 1.3827 | 1.1759 |
| No log | 2.6944 | 194 | 1.2632 | 0.5649 | 1.2632 | 1.1239 |
| No log | 2.7222 | 196 | 1.1863 | 0.6015 | 1.1863 | 1.0892 |
| No log | 2.75 | 198 | 1.1659 | 0.6119 | 1.1659 | 1.0798 |
| No log | 2.7778 | 200 | 1.0489 | 0.6015 | 1.0489 | 1.0242 |
| No log | 2.8056 | 202 | 1.0961 | 0.5758 | 1.0961 | 1.0470 |
| No log | 2.8333 | 204 | 1.4333 | 0.4412 | 1.4333 | 1.1972 |
| No log | 2.8611 | 206 | 1.6476 | 0.3478 | 1.6476 | 1.2836 |
| No log | 2.8889 | 208 | 1.4061 | 0.4179 | 1.4061 | 1.1858 |
| No log | 2.9167 | 210 | 1.0889 | 0.5891 | 1.0889 | 1.0435 |
| No log | 2.9444 | 212 | 1.0744 | 0.5484 | 1.0744 | 1.0365 |
| No log | 2.9722 | 214 | 1.1480 | 0.5210 | 1.1480 | 1.0715 |
| No log | 3.0 | 216 | 1.3145 | 0.5041 | 1.3145 | 1.1465 |
| No log | 3.0278 | 218 | 1.2711 | 0.528 | 1.2711 | 1.1274 |
| No log | 3.0556 | 220 | 1.2357 | 0.6047 | 1.2357 | 1.1116 |
| No log | 3.0833 | 222 | 1.2082 | 0.6047 | 1.2082 | 1.0992 |
| No log | 3.1111 | 224 | 1.1694 | 0.5938 | 1.1694 | 1.0814 |
| No log | 3.1389 | 226 | 1.2286 | 0.6047 | 1.2286 | 1.1084 |
| No log | 3.1667 | 228 | 1.2674 | 0.5538 | 1.2674 | 1.1258 |
| No log | 3.1944 | 230 | 1.2476 | 0.5781 | 1.2476 | 1.1170 |
| No log | 3.2222 | 232 | 1.1167 | 0.5920 | 1.1167 | 1.0568 |
| No log | 3.25 | 234 | 1.0856 | 0.5873 | 1.0856 | 1.0419 |
| No log | 3.2778 | 236 | 1.1765 | 0.5781 | 1.1765 | 1.0847 |
| No log | 3.3056 | 238 | 1.3801 | 0.4118 | 1.3801 | 1.1748 |
| No log | 3.3333 | 240 | 1.5313 | 0.3852 | 1.5313 | 1.2375 |
| No log | 3.3611 | 242 | 1.4282 | 0.4118 | 1.4282 | 1.1951 |
| No log | 3.3889 | 244 | 1.4478 | 0.4118 | 1.4478 | 1.2033 |
| No log | 3.4167 | 246 | 1.3077 | 0.4853 | 1.3077 | 1.1436 |
| No log | 3.4444 | 248 | 1.1386 | 0.5649 | 1.1386 | 1.0670 |
| No log | 3.4722 | 250 | 1.1834 | 0.5909 | 1.1834 | 1.0878 |
| No log | 3.5 | 252 | 1.3690 | 0.4361 | 1.3690 | 1.1700 |
| No log | 3.5278 | 254 | 1.4039 | 0.4265 | 1.4039 | 1.1848 |
| No log | 3.5556 | 256 | 1.2370 | 0.5077 | 1.2370 | 1.1122 |
| No log | 3.5833 | 258 | 1.0814 | 0.6165 | 1.0814 | 1.0399 |
| No log | 3.6111 | 260 | 0.9950 | 0.6165 | 0.9950 | 0.9975 |
| No log | 3.6389 | 262 | 1.0278 | 0.5538 | 1.0278 | 1.0138 |
| No log | 3.6667 | 264 | 1.1408 | 0.5191 | 1.1408 | 1.0681 |
| No log | 3.6944 | 266 | 1.0988 | 0.5455 | 1.0988 | 1.0482 |
| No log | 3.7222 | 268 | 0.9195 | 0.6308 | 0.9195 | 0.9589 |
| No log | 3.75 | 270 | 0.7676 | 0.6615 | 0.7676 | 0.8761 |
| No log | 3.7778 | 272 | 0.6992 | 0.6870 | 0.6992 | 0.8362 |
| No log | 3.8056 | 274 | 0.7475 | 0.6718 | 0.7475 | 0.8646 |
| No log | 3.8333 | 276 | 0.8413 | 0.6667 | 0.8413 | 0.9172 |
| No log | 3.8611 | 278 | 0.8686 | 0.5645 | 0.8686 | 0.9320 |
| No log | 3.8889 | 280 | 0.9617 | 0.4915 | 0.9617 | 0.9807 |
| No log | 3.9167 | 282 | 1.0578 | 0.4915 | 1.0578 | 1.0285 |
| No log | 3.9444 | 284 | 1.1851 | 0.5827 | 1.1851 | 1.0886 |
| No log | 3.9722 | 286 | 1.2644 | 0.5496 | 1.2644 | 1.1245 |
| No log | 4.0 | 288 | 1.4191 | 0.4580 | 1.4191 | 1.1913 |
| No log | 4.0278 | 290 | 1.3767 | 0.5113 | 1.3767 | 1.1733 |
| No log | 4.0556 | 292 | 1.0991 | 0.6061 | 1.0991 | 1.0484 |
| No log | 4.0833 | 294 | 1.0500 | 0.6418 | 1.0500 | 1.0247 |
| No log | 4.1111 | 296 | 1.2758 | 0.5224 | 1.2758 | 1.1295 |
| No log | 4.1389 | 298 | 1.6315 | 0.3609 | 1.6315 | 1.2773 |
| No log | 4.1667 | 300 | 2.0463 | 0.1353 | 2.0463 | 1.4305 |
| No log | 4.1944 | 302 | 1.9395 | 0.1515 | 1.9395 | 1.3926 |
| No log | 4.2222 | 304 | 1.5030 | 0.375 | 1.5030 | 1.2260 |
| No log | 4.25 | 306 | 1.1317 | 0.5556 | 1.1317 | 1.0638 |
| No log | 4.2778 | 308 | 1.0343 | 0.5620 | 1.0343 | 1.0170 |
| No log | 4.3056 | 310 | 1.0597 | 0.5806 | 1.0597 | 1.0294 |
| No log | 4.3333 | 312 | 1.2295 | 0.5191 | 1.2295 | 1.1088 |
| No log | 4.3611 | 314 | 1.3480 | 0.4733 | 1.3480 | 1.1610 |
| No log | 4.3889 | 316 | 1.2341 | 0.5 | 1.2341 | 1.1109 |
| No log | 4.4167 | 318 | 1.1085 | 0.5909 | 1.1085 | 1.0529 |
| No log | 4.4444 | 320 | 1.0203 | 0.6016 | 1.0203 | 1.0101 |
| No log | 4.4722 | 322 | 1.0681 | 0.6016 | 1.0681 | 1.0335 |
| No log | 4.5 | 324 | 1.1791 | 0.5156 | 1.1791 | 1.0859 |
| No log | 4.5278 | 326 | 1.2348 | 0.4844 | 1.2348 | 1.1112 |
| No log | 4.5556 | 328 | 1.2018 | 0.4921 | 1.2018 | 1.0963 |
| No log | 4.5833 | 330 | 1.1405 | 0.5 | 1.1405 | 1.0680 |
| No log | 4.6111 | 332 | 1.1577 | 0.5197 | 1.1577 | 1.0760 |
| No log | 4.6389 | 334 | 1.2673 | 0.4697 | 1.2673 | 1.1258 |
| No log | 4.6667 | 336 | 1.2188 | 0.4962 | 1.2188 | 1.1040 |
| No log | 4.6944 | 338 | 1.0474 | 0.6190 | 1.0474 | 1.0234 |
| No log | 4.7222 | 340 | 0.9932 | 0.5378 | 0.9932 | 0.9966 |
| No log | 4.75 | 342 | 1.0270 | 0.5085 | 1.0270 | 1.0134 |
| No log | 4.7778 | 344 | 1.1238 | 0.5299 | 1.1238 | 1.0601 |
| No log | 4.8056 | 346 | 1.2484 | 0.4522 | 1.2484 | 1.1173 |
| No log | 4.8333 | 348 | 1.4200 | 0.4407 | 1.4200 | 1.1916 |
| No log | 4.8611 | 350 | 1.5065 | 0.4031 | 1.5065 | 1.2274 |
| No log | 4.8889 | 352 | 1.4235 | 0.4427 | 1.4235 | 1.1931 |
| No log | 4.9167 | 354 | 1.2197 | 0.5312 | 1.2197 | 1.1044 |
| No log | 4.9444 | 356 | 0.9764 | 0.5691 | 0.9764 | 0.9882 |
| No log | 4.9722 | 358 | 0.9269 | 0.6299 | 0.9269 | 0.9628 |
| No log | 5.0 | 360 | 0.8689 | 0.6769 | 0.8689 | 0.9322 |
| No log | 5.0278 | 362 | 0.8423 | 0.6406 | 0.8423 | 0.9178 |
| No log | 5.0556 | 364 | 1.0728 | 0.6475 | 1.0728 | 1.0358 |
| No log | 5.0833 | 366 | 1.2336 | 0.4818 | 1.2336 | 1.1107 |
| No log | 5.1111 | 368 | 1.1522 | 0.5606 | 1.1522 | 1.0734 |
| No log | 5.1389 | 370 | 1.0483 | 0.6179 | 1.0483 | 1.0238 |
| No log | 5.1667 | 372 | 1.0668 | 0.5254 | 1.0668 | 1.0329 |
| No log | 5.1944 | 374 | 1.1254 | 0.4828 | 1.1254 | 1.0608 |
| No log | 5.2222 | 376 | 1.2095 | 0.5210 | 1.2095 | 1.0998 |
| No log | 5.25 | 378 | 1.3316 | 0.4882 | 1.3316 | 1.1539 |
| No log | 5.2778 | 380 | 1.3534 | 0.4885 | 1.3534 | 1.1633 |
| No log | 5.3056 | 382 | 1.3485 | 0.4697 | 1.3485 | 1.1612 |
| No log | 5.3333 | 384 | 1.2567 | 0.5152 | 1.2567 | 1.1210 |
| No log | 5.3611 | 386 | 1.0685 | 0.6260 | 1.0685 | 1.0337 |
| No log | 5.3889 | 388 | 1.0041 | 0.6047 | 1.0041 | 1.0021 |
| No log | 5.4167 | 390 | 1.0367 | 0.6142 | 1.0367 | 1.0182 |
| No log | 5.4444 | 392 | 1.1564 | 0.5736 | 1.1564 | 1.0753 |
| No log | 5.4722 | 394 | 1.3654 | 0.4427 | 1.3654 | 1.1685 |
| No log | 5.5 | 396 | 1.4832 | 0.4091 | 1.4832 | 1.2179 |
| No log | 5.5278 | 398 | 1.4654 | 0.4091 | 1.4654 | 1.2106 |
| No log | 5.5556 | 400 | 1.3551 | 0.4769 | 1.3551 | 1.1641 |
| No log | 5.5833 | 402 | 1.3279 | 0.5 | 1.3279 | 1.1523 |
| No log | 5.6111 | 404 | 1.3795 | 0.4769 | 1.3795 | 1.1745 |
| No log | 5.6389 | 406 | 1.4367 | 0.3817 | 1.4367 | 1.1986 |
| No log | 5.6667 | 408 | 1.4228 | 0.3636 | 1.4228 | 1.1928 |
| No log | 5.6944 | 410 | 1.3189 | 0.4806 | 1.3189 | 1.1484 |
| No log | 5.7222 | 412 | 1.1848 | 0.5714 | 1.1848 | 1.0885 |
| No log | 5.75 | 414 | 1.1514 | 0.5410 | 1.1514 | 1.0730 |
| No log | 5.7778 | 416 | 1.1606 | 0.5484 | 1.1606 | 1.0773 |
| No log | 5.8056 | 418 | 1.2638 | 0.5344 | 1.2638 | 1.1242 |
| No log | 5.8333 | 420 | 1.4272 | 0.4030 | 1.4272 | 1.1946 |
| No log | 5.8611 | 422 | 1.4285 | 0.4296 | 1.4285 | 1.1952 |
| No log | 5.8889 | 424 | 1.2430 | 0.5344 | 1.2430 | 1.1149 |
| No log | 5.9167 | 426 | 1.1906 | 0.5077 | 1.1906 | 1.0911 |
| No log | 5.9444 | 428 | 1.2950 | 0.4733 | 1.2950 | 1.1380 |
| No log | 5.9722 | 430 | 1.3304 | 0.4662 | 1.3304 | 1.1534 |
| No log | 6.0 | 432 | 1.2703 | 0.5077 | 1.2703 | 1.1271 |
| No log | 6.0278 | 434 | 1.2792 | 0.4733 | 1.2792 | 1.1310 |
| No log | 6.0556 | 436 | 1.4285 | 0.4242 | 1.4285 | 1.1952 |
| No log | 6.0833 | 438 | 1.6041 | 0.3852 | 1.6041 | 1.2665 |
| No log | 6.1111 | 440 | 1.5224 | 0.4030 | 1.5224 | 1.2339 |
| No log | 6.1389 | 442 | 1.3487 | 0.4511 | 1.3487 | 1.1613 |
| No log | 6.1667 | 444 | 1.2148 | 0.5 | 1.2148 | 1.1022 |
| No log | 6.1944 | 446 | 1.2255 | 0.5344 | 1.2255 | 1.1070 |
| No log | 6.2222 | 448 | 1.2421 | 0.5344 | 1.2421 | 1.1145 |
| No log | 6.25 | 450 | 1.2952 | 0.5077 | 1.2952 | 1.1381 |
| No log | 6.2778 | 452 | 1.3528 | 0.4806 | 1.3528 | 1.1631 |
| No log | 6.3056 | 454 | 1.4612 | 0.4545 | 1.4612 | 1.2088 |
| No log | 6.3333 | 456 | 1.4537 | 0.4545 | 1.4537 | 1.2057 |
| No log | 6.3611 | 458 | 1.2978 | 0.4923 | 1.2978 | 1.1392 |
| No log | 6.3889 | 460 | 1.0979 | 0.5366 | 1.0979 | 1.0478 |
| No log | 6.4167 | 462 | 0.9630 | 0.5920 | 0.9630 | 0.9813 |
| No log | 6.4444 | 464 | 0.9390 | 0.5920 | 0.9390 | 0.9690 |
| No log | 6.4722 | 466 | 1.0042 | 0.5645 | 1.0042 | 1.0021 |
| No log | 6.5 | 468 | 1.1948 | 0.4923 | 1.1948 | 1.0931 |
| No log | 6.5278 | 470 | 1.3385 | 0.4923 | 1.3385 | 1.1569 |
| No log | 6.5556 | 472 | 1.3264 | 0.4923 | 1.3264 | 1.1517 |
| No log | 6.5833 | 474 | 1.2129 | 0.4923 | 1.2129 | 1.1013 |
| No log | 6.6111 | 476 | 1.1085 | 0.5781 | 1.1085 | 1.0528 |
| No log | 6.6389 | 478 | 1.0693 | 0.6047 | 1.0693 | 1.0341 |
| No log | 6.6667 | 480 | 1.0078 | 0.5938 | 1.0078 | 1.0039 |
| No log | 6.6944 | 482 | 1.0951 | 0.5781 | 1.0951 | 1.0465 |
| No log | 6.7222 | 484 | 1.3836 | 0.5191 | 1.3836 | 1.1763 |
| No log | 6.75 | 486 | 1.5579 | 0.3817 | 1.5579 | 1.2481 |
| No log | 6.7778 | 488 | 1.4444 | 0.4697 | 1.4444 | 1.2018 |
| No log | 6.8056 | 490 | 1.3355 | 0.4923 | 1.3355 | 1.1557 |
| No log | 6.8333 | 492 | 1.2470 | 0.5116 | 1.2470 | 1.1167 |
| No log | 6.8611 | 494 | 1.2569 | 0.5116 | 1.2569 | 1.1211 |
| No log | 6.8889 | 496 | 1.1659 | 0.5469 | 1.1659 | 1.0798 |
| No log | 6.9167 | 498 | 1.1413 | 0.5469 | 1.1413 | 1.0683 |
| 0.3671 | 6.9444 | 500 | 1.2516 | 0.5191 | 1.2516 | 1.1187 |
| 0.3671 | 6.9722 | 502 | 1.4346 | 0.4394 | 1.4346 | 1.1978 |
| 0.3671 | 7.0 | 504 | 1.5741 | 0.3852 | 1.5741 | 1.2546 |
| 0.3671 | 7.0278 | 506 | 1.7685 | 0.2920 | 1.7685 | 1.3299 |
| 0.3671 | 7.0556 | 508 | 1.7300 | 0.2920 | 1.7300 | 1.3153 |
| 0.3671 | 7.0833 | 510 | 1.6630 | 0.3582 | 1.6630 | 1.2896 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF | Triangle104 | "2024-12-14T21:29:33" | 29 | 0 | null | [
"gguf",
"qwen",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Orion-zhen/dpo-toxic-zh",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:Crystalcareai/Intel-DPO-Pairs-Norefusals",
"base_model:Orion-zhen/Qwen2.5-7B-Instruct-Uncensored",
"base_model:quantized:Orion-zhen/Qwen2.5-7B-Instruct-Uncensored",
"license:gpl-3.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-12-14T21:28:28" | ---
language:
- zh
- en
license: gpl-3.0
tags:
- qwen
- uncensored
- llama-cpp
- gguf-my-repo
base_model: Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-7B-Instruct-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.04
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 1.36
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.58
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.07
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
---
# Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF
This model was converted to GGUF format from [`Orion-zhen/Qwen2.5-7B-Instruct-Uncensored`](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) for more details on the model.
---
Model details:
-
This model is an uncensored fine-tuned version of Qwen2.5-7B-Instruct.
However, I can still notice that even though it is uncensored, the model
fails to generate detailed descriptions of certain extreme scenarios,
which may stem from deletions in some of the datasets used in Qwen's
pretraining stage.
Training details
-
I used SFT + DPO to remove censorship while trying to maintain the original model's capabilities.
SFT:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
DPO:
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_m.gguf -c 2048
```
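Alternatively, the GGUF file can be loaded from Python with the `llama-cpp-python` bindings; a rough sketch (API details may vary across versions, so treat this as an assumption rather than official usage):

```python
from llama_cpp import Llama

# Pulls the quantized GGUF file from the Hub and loads it locally
llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_M-GGUF",
    filename="qwen2.5-7b-instruct-uncensored-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}]
)
print(out["choices"][0]["message"]["content"])
```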
|
ycfNTU/bloomz-560m_NER_CAUSAL_LM | ycfNTU | "2024-03-19T11:58:48" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-19T10:12:43" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave | imdanboy | "2022-05-28T16:52:35" | 5 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2022-05-28T16:51:54" | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## ESPnet2 TTS model
### `imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
This model was trained by imdanboy using the ljspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout c173c30930631731e6836c274a591ad571749741
pip install -e .
cd egs2/ljspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
```
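For Python inference, the checkpoint can also be loaded through ESPnet2's `Text2Speech` interface; a minimal sketch (assuming `espnet` and `espnet_model_zoo` are installed):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Downloads and instantiates the pretrained JETS model from the Hub
tts = Text2Speech.from_pretrained(
    "imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)
wav = tts("Hello, this is a test of the JETS text-to-speech model.")["wav"]
sf.write("out.wav", wav.numpy(), tts.fs)
```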
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_jets.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_jets_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 39471
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- text2mel_loss
- min
- - train
- text2mel_loss
- min
- - train
- total_count
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 3000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/text_shape.phn
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/text_shape.phn
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/energy.scp
- energy
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/energy.scp
- energy
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: true
token_list:
- <blank>
- <unk>
- AH0
- N
- T
- D
- S
- R
- L
- DH
- K
- Z
- IH1
- IH0
- M
- EH1
- W
- P
- AE1
- AH1
- V
- ER0
- F
- ','
- AA1
- B
- HH
- IY1
- UW1
- IY0
- AO1
- EY1
- AY1
- .
- OW1
- SH
- NG
- G
- ER1
- CH
- JH
- Y
- AW1
- TH
- UH1
- EH2
- OW0
- EY2
- AO0
- IH2
- AE2
- AY2
- AA2
- UW0
- EH0
- OY1
- EY0
- AO2
- ZH
- OW2
- AE0
- UW2
- AH2
- AY0
- IY2
- AW2
- AA0
- ''''
- ER2
- UH2
- '?'
- OY2
- '!'
- AW0
- UH0
- OY0
- ..
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/feats_stats.npz
tts: jets
tts_conf:
generator_type: jets_generator
generator_params:
adim: 256
aheads: 2
elayers: 4
eunits: 1024
dlayers: 4
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
encoder_type: transformer
decoder_type: transformer
conformer_rel_pos_type: latest
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
generator_out_channels: 1
generator_channels: 512
generator_global_channels: -1
generator_kernel_size: 7
generator_upsample_scales:
- 8
- 8
- 2
- 2
generator_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
generator_resblock_kernel_sizes:
- 3
- 7
- 11
generator_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
generator_use_additional_convs: true
generator_bias: true
generator_nonlinear_activation: LeakyReLU
generator_nonlinear_activation_params:
negative_slope: 0.1
generator_use_weight_norm: true
segment_size: 64
idim: 78
odim: 80
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_var: 1.0
lambda_align: 2.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: dio
pitch_extract_conf:
reduction_factor: 1
use_token_averaged_f0: false
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
reduction_factor: 1
use_token_averaged_energy: false
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/energy_stats.npz
required:
- output_dir
- token_list
version: '202204'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Daemontatox/Mini-Cogito-R1.1 | Daemontatox | "2025-02-25T19:24:37" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Daemontatox/mini-Cogito-R1",
"base_model:finetune:Daemontatox/mini-Cogito-R1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T19:24:17" | ---
base_model: Daemontatox/mini-Cogito-R1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Daemontatox
- **License:** apache-2.0
- **Finetuned from model :** Daemontatox/mini-Cogito-R1
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
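A minimal generation sketch with 🤗 Transformers (the chat-template usage is an assumption; adjust the prompt format to your setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Daemontatox/Mini-Cogito-R1.1")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/Mini-Cogito-R1.1", device_map="auto")

messages = [{"role": "user", "content": "Explain chain-of-thought prompting in one paragraph."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```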
|
messiah10/distilbert-base-uncased-finetuned-squad | messiah10 | "2024-04-09T02:41:11" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-04-09T01:21:47" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1613
## Model description
More information needed
## Intended uses & limitations
More information needed
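As a minimal usage sketch (assuming the checkpoint is published under this repo id), extractive question answering can be run with the standard pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="messiah10/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What does DistilBERT distill?",
    context="DistilBERT is a smaller, faster transformer distilled from BERT.",
)
print(result["answer"], result["score"])
```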
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1999 | 1.0 | 5533 | 1.1604 |
| 0.9468 | 2.0 | 11066 | 1.1086 |
| 0.7487 | 3.0 | 16599 | 1.1613 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MaverickAlex/R-FLAV-B-1-AIST | MaverickAlex | "2025-03-14T07:50:30" | 14 | 0 | diffusers | [
"diffusers",
"safetensors",
"audio-to-video",
"arxiv:2503.08307",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-03-13T15:45:28" | ---
license: cc-by-nc-4.0
tags:
- audio-to-video
library_name: diffusers
---
Models of [R-FLAV](https://arxiv.org/abs/2503.08307) trained on the Landscape and AIST++ datasets for 400k iterations.
For more info, please refer to the GitHub repository at https://github.com/ErgastiAlex/R-FLAV
To download the checkpoints directly in code, you can do the following:
```python
from huggingface_hub import hf_hub_download

# FLAV and the vocoder Generator are defined in the R-FLAV repository
# (https://github.com/ErgastiAlex/R-FLAV); run this from inside that repo.
# The Generator import location is an assumption based on the repo layout.
from models import FLAV, Generator

model_ckpt = "MaverickAlex/R-FLAV-B-1-AIST"  # or another R-FLAV checkpoint repo
model = FLAV.from_pretrained(model_ckpt)

# Fetch the vocoder config and weights, then load the Generator from their folder
hf_hub_download(repo_id="MaverickAlex/R-FLAV-B-1-LS", filename="vocoder/config.json")
vocoder_path = hf_hub_download(repo_id="MaverickAlex/R-FLAV-B-1-LS", filename="vocoder/vocoder.pt")
vocoder_path = vocoder_path.replace("vocoder.pt", "")
vocoder = Generator.from_pretrained(vocoder_path)
``` |
HPL/roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample | HPL | "2022-11-13T03:41:26" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-11-13T02:36:25" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1441
## Model description
More information needed
## Intended uses & limitations
More information needed
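As a minimal usage sketch (assuming the checkpoint is published under this repo id), masked-token prediction works with the standard pipeline (RoBERTa uses `<mask>` as its mask token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="HPL/roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample")
for pred in fill("The weather today is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```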
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4294 | 1.0 | 1407 | 2.2323 |
| 2.3091 | 2.0 | 2814 | 2.1470 |
| 2.23 | 3.0 | 4221 | 2.1767 |
| 2.1866 | 4.0 | 5628 | 2.1625 |
| 2.171 | 5.0 | 7035 | 2.1441 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.10.3
|
HSING-I/uuu_fine_tune_taipower | HSING-I | "2024-05-25T05:18:32" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-25T05:18:32" | ---
license: apache-2.0
---
|
TheBloke/Tulpar-7B-v0-GPTQ | TheBloke | "2023-09-27T12:48:40" | 26 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:HyperbeeAI/Tulpar-7b-v0",
"base_model:quantized:HyperbeeAI/Tulpar-7b-v0",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-09-10T14:27:38" | ---
language:
- en
license: llama2
library_name: transformers
model_name: Tulpar 7B v0
base_model: HyperbeeAI/Tulpar-7b-v0
inference: false
model_creator: HyperbeeAI
model_type: llama
prompt_template: '### User: {prompt}
### Assistant:
'
quantized_by: TheBloke
thumbnail: https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tulpar 7B v0 - GPTQ
- Model creator: [HyperbeeAI](https://huggingface.co/HyperbeeAI)
- Original model: [Tulpar 7B v0](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0)
<!-- description start -->
## Description
This repo contains GPTQ model files for [HyperbeeAI's Tulpar 7B v0](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tulpar-7B-v0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF)
* [HyperbeeAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Hashes
```
### User: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Tulpar-7B-v0-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
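- With `huggingface-cli` (from a recent `huggingface_hub` release; exact flags are version-dependent, so treat this as a sketch), you can download a branch with:
```
huggingface-cli download TheBloke/Tulpar-7B-v0-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Tulpar-7B-v0-GPTQ
```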
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Tulpar-7B-v0-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Tulpar-7B-v0-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Tulpar-7B-v0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Tulpar-7B-v0-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''### User: {prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: HyperbeeAI's Tulpar 7B v0
<p align="center">
<img src="https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png" width="360" height="360" >
</p>
# Model Description
Tulpar-7b is a Llama2-7b-based model trained by HyperbeeAI. Training was done on a filtered and preprocessed instruction-finetuning dataset that includes GPT-4-generated data and curated datasets such as Airoboros and Platypus.
# Example Usage
Loading the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HyperbeeAI/Tulpar-7b-v0")
model = AutoModelForCausalLM.from_pretrained("HyperbeeAI/Tulpar-7b-v0", device_map="auto")
```
You can run inference with both of the following prompts:
```python
input_text="What is deep learning?"
prompt = f"### User: {input_text}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```
```python
input_text="What is deep learning?"
prompt = f"Question: {input_text}\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```
# Evaluation
Our offline HF Leaderboard evaluation results:
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|*arc_challenge*|acc_norm|0.5614|
|*hellaswag*|acc_norm|0.7901|
|*mmlu*|acc_norm|0.5242|
|*truthfulqa_mc*|mc2|0.5160|
|**Average**|-|**0.5979**|
Other GPT4All evaluation results:
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|boolq|acc |0.8306|
|piqa|acc |0.7905|
| |acc_norm|0.7884|
|winogrande|acc |0.7159|
|openbookqa|acc |0.356|
| |acc_norm|0.448|
|**Average** (including HF leaderboard datasets) | | **0.6468** |
BigBenchHard results:
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|bigbench_causal_judgement |multiple_choice_grade|0.6105|
|bigbench_date_understanding |multiple_choice_grade|0.6423|
|bigbench_disambiguation_qa |multiple_choice_grade|0.3643|
|bigbench_dyck_languages |multiple_choice_grade|0.2000|
|bigbench_formal_fallacies_syllogisms_negation |multiple_choice_grade|0.5002|
|bigbench_geometric_shapes |multiple_choice_grade|0.0000|
| |exact_str_match |0.0000|
|bigbench_hyperbaton |multiple_choice_grade|0.6754|
|bigbench_logical_deduction_five_objects |multiple_choice_grade|0.2700|
|bigbench_logical_deduction_seven_objects |multiple_choice_grade|0.1929|
|bigbench_logical_deduction_three_objects |multiple_choice_grade|0.4133|
|bigbench_movie_recommendation |multiple_choice_grade|0.3000|
|bigbench_navigate |multiple_choice_grade|0.5000|
|bigbench_reasoning_about_colored_objects |multiple_choice_grade|0.5750|
|bigbench_ruin_names |multiple_choice_grade|0.3281|
|bigbench_salient_translation_error_detection |multiple_choice_grade|0.2976|
|bigbench_snarks |multiple_choice_grade|0.6022|
|bigbench_sports_understanding |multiple_choice_grade|0.5122|
|bigbench_temporal_sequences |multiple_choice_grade|0.1450|
|bigbench_tracking_shuffled_objects_five_objects |multiple_choice_grade|0.1976|
|bigbench_tracking_shuffled_objects_seven_objects|multiple_choice_grade|0.1440|
|bigbench_tracking_shuffled_objects_three_objects|multiple_choice_grade|0.4133|
|**Average**| |**0.3754**|
# Ethical Considerations and Limitations
Tulpar is a technology with potential risks and limitations. This model is finetuned only in English and all language-related scenarios are not covered. As HyperbeeAI, we neither guarantee ethical, accurate, unbiased, objective responses nor endorse its outputs. Before deploying this model, you are advised to make safety tests for your use case.
|
kallilikhitha123/llama-Quantized-Model-8B_750_12-03-2025 | kallilikhitha123 | "2025-03-12T12:28:47" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-12T12:25:16" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mav23/selfrag_llama2_7b-GGUF | mav23 | "2024-12-02T12:55:44" | 31 | 0 | null | [
"gguf",
"arxiv:2310.11511",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-12-02T12:11:43" | ---
license: mit
---
This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates outputs to diverse user queries as well as *reflection tokens* to call the retrieval system adaptively and criticize its own output and retrieved passages.
Self-RAG is trained on our instruction-following corpora with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
At inference, we leverage reflection tokens covering diverse aspects of generations to sample the best output aligning users' preferences.
See full descriptions in [our paper](https://arxiv.org/abs/2310.11511).
## Usage
Here, we show an easy way to quickly download our model from HuggingFace and run with `vllm` with pre-given passages. Make sure to install dependencies listed at [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirements.txt).
To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag).
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
from vllm import LLM, SamplingParams
model = LLM("selfrag/selfrag_llama2_7b", download_dir="/gscratch/h2lab/akari/model_cache", dtype="half")
sampling_params = SamplingParams(temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False)
def format_prompt(input, paragraph=None):
prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input)
if paragraph is not None:
prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
return prompt
query_1 = "Leave odd one out: twitter, instagram, whatsapp."
query_2 = "Can you tell me the difference between llamas and alpacas?"
queries = [query_1, query_2]
preds = model.generate([format_prompt(query) for query in queries], sampling_params)
for pred in preds:
print("Model prediction: {0}".format(pred.outputs[0].text))
# Model prediction: Twitter, Instagram, and WhatsApp are all social media platforms.[No Retrieval]WhatsApp is the odd one out because it is a messaging app, while Twitter and # Instagram are primarily used for sharing photos and videos.[Utility:5]</s> (this query doesn't require factual grounding; just skip retrieval and do normal instruction-following generation)
# Model prediction: Sure![Retrieval]<paragraph> ... (this query requires factual grounding, call a retriever)
# generate with retrieved passage
prompt = format_prompt("Can you tell me the difference between llamas and alpacas?", paragraph="The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.")
preds = model.generate([prompt], sampling_params)
print([pred.outputs[0].text for pred in preds])
# ['[Relevant]Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.[Fully supported][Utility:5]</s>']
```
## Input Format
As described in the `format_prompt` function, your input should be formed as
```
### Instruction:\n{instruction}\n\n### Response:\n
```
or, if you have additional input:
```
### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n
```
You can insert paragraphs anywhere after `### Response:\n"`, but make sure to mark paragraphs as paragraph tokens (i.e., `<paragraph>{0}</paragraph>`).
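Putting both formats together, a small helper like the following can build the prompt (this function is an illustrative sketch, not part of the official self-rag repo; see `format_prompt` above for the variant used in the examples):

```python
# Illustrative helper combining the documented prompt formats; the function
# name and structure are assumptions, not part of the official self-rag repo.
def build_prompt(instruction, extra_input=None, paragraph=None):
    if extra_input is not None:
        prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{extra_input}\n\n### Response:\n"
    else:
        prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    if paragraph is not None:
        # Retrieved passages must be wrapped in paragraph tokens.
        prompt += f"[Retrieval]<paragraph>{paragraph}</paragraph>"
    return prompt
```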
## Training details
Our training data is available at the HuggingFace dataset [selfrag_train_data](https://huggingface.co/datasets/selfrag/selfrag_train_data).
See our official repository for the training details.
We used 8 A100 40GB for training on the Stability HPC server.
## Citation and contact
If you use this model, please cite our work:
```
@article{asai2023selfrag,
author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh},
title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection},
year = {2023},
journal = { arXiv preprint arXiv:2310.11511 },
URL = {https://arxiv.org/abs/2310.11511}
}
``` |
laquythang/18eea0f7-6da7-49fb-9ac1-60312cf0bd13 | laquythang | "2025-01-17T19:27:13" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-17T19:17:31" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 18eea0f7-6da7-49fb-9ac1-60312cf0bd13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8c59141bab786fea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8c59141bab786fea_train_data.json
type:
field_input: ''
field_instruction: da
field_output: da_bornholm
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/18eea0f7-6da7-49fb-9ac1-60312cf0bd13
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8c59141bab786fea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee19f7d3-f2f0-497e-9124-ad96647dcce2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee19f7d3-f2f0-497e-9124-ad96647dcce2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 18eea0f7-6da7-49fb-9ac1-60312cf0bd13
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.4351 | 0.2558 | 200 | 5.4367 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
greatxue1/Qwen2.5-7B-Instruct-Follow | greatxue1 | "2025-03-12T12:52:27" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:greatxue1/alpaca-vff-naive",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T01:32:26" | ---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets: greatxue1/alpaca-vff-naive
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Follow
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-Follow
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [greatxue1/alpaca-vff-naive](https://huggingface.co/datasets/greatxue1/alpaca-vff-naive) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="greatxue1/Qwen2.5-7B-Instruct-Follow", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhongkaixue-university-of-oxford/huggingface/runs/4rhac0oi)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.0.2
- Tokenizers: 0.20.3
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JapGuy/MiroZbirka_v1_490Epochs_RVC_v2 | JapGuy | "2023-08-17T18:47:22" | 0 | 0 | null | [
"music",
"rvc",
"miro",
"meky",
"miroslav",
"zbirka",
"model",
"audio-to-audio",
"sk",
"cs",
"license:openrail",
"region:us"
] | audio-to-audio | "2023-08-16T18:57:13" | ---
license: openrail
language:
- sk
- cs
pipeline_tag: audio-to-audio
tags:
- music
- rvc
- miro
- meky
- miroslav
- zbirka
- model
---

# Miro " Meky " Žbirka [SK] (v1)
# 490 Epochs - RVC V2 - mangio-creep - 64 Hop Length
Trained on 8 minutes of isolated acapellas, extracted using UVR (Voc FT + Reverb HQ) plus Audacity to remove parts with doubled vocals and vocals from other singers (with a noise gate).
Isolated acapellas from:
- Domino
- Biela pani
- Bezchybna
- Balada o polnych vtakoch
- Atlantida
- Ako obrazok
daviibrt/en_ner_jnlpba_md | daviibrt | "2024-02-09T15:49:40" | 2 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"license:cc-by-sa-3.0",
"model-index",
"region:us"
] | token-classification | "2024-02-09T15:49:10" | ---
tags:
- spacy
- token-classification
language:
- en
license: cc-by-sa-3.0
model-index:
- name: en_ner_jnlpba_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7696202532
- name: NER Recall
type: recall
value: 0.7536623845
- name: NER F Score
type: f_score
value: 0.7615577317
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.0
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.0
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.0
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.0
---
Spacy Models for Biomedical Text.
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_jnlpba_md` |
| **Version** | `0.5.3` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Vectors** | 4087446 keys, 50000 unique vectors (200 dimensions) |
| **Sources** | JNLPBA<br>OntoNotes 5<br>Common Crawl<br>GENIA 1.0 |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Allen Institute for Artificial Intelligence](https://allenai.github.io/SciSpaCy/) |
### Label Scheme
<details>
<summary>View label scheme (102 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `acomp`, `advcl`, `advmod`, `amod`, `amod@nmod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dative`, `dep`, `det`, `det:predet`, `dobj`, `expl`, `intj`, `mark`, `meta`, `mwe`, `neg`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubjpass`, `nummod`, `parataxis`, `pcomp`, `pobj`, `preconj`, `predet`, `prep`, `punct`, `quantmod`, `xcomp` |
| **`ner`** | `CELL_LINE`, `CELL_TYPE`, `DNA`, `PROTEIN`, `RNA` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `LEMMA_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 0.00 |
| `SENTS_R` | 0.00 |
| `SENTS_F` | 0.00 |
| `ENTS_F` | 76.16 |
| `ENTS_P` | 76.96 |
| `ENTS_R` | 75.37 |
| `NER_LOSS` | 1718993.54 | |
RichardErkhov/maxfrax_-_Llama-3.2-3B-Instruct-ConvFinQA-1e-4bits | RichardErkhov | "2025-01-11T09:44:09" | 7 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T09:43:05" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-ConvFinQA-1e - bnb 4bits
- Model creator: https://huggingface.co/maxfrax/
- Original model: https://huggingface.co/maxfrax/Llama-3.2-3B-Instruct-ConvFinQA-1e/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jellon/Lyra4-Gutenberg-12B-6bpw | Jellon | "2024-10-10T12:17:35" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | "2024-10-10T10:59:07" | ---
license: cc-by-nc-4.0
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v4
datasets:
- jondurbin/gutenberg-dpo-v0.1
model-index:
- name: Lyra4-Gutenberg-12B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 22.12
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.71
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.17
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.57
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
---
6bpw exl2 quant of: https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B
# Lyra4-Gutenberg-12B
[Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
### Method
ORPO-finetuned using an RTX 3090 + 4060 Ti for 3 epochs.
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)
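As a rough sketch of what such an ORPO run can look like with TRL (the hyperparameters, batch size, and learning rate below are illustrative assumptions, not the exact recipe used for this model):

```python
# Illustrative ORPO finetuning sketch with TRL; hyperparameters are
# assumptions, not the exact recipe used for Lyra4-Gutenberg-12B.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Sao10K/MN-12B-Lyra-v4"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# gutenberg-dpo ships prompt/chosen/rejected columns, the format ORPO expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = ORPOConfig(
    output_dir="lyra4-gutenberg-orpo",
    num_train_epochs=3,              # matches the 3 epochs noted above
    per_device_train_batch_size=1,   # illustrative
    learning_rate=5e-6,              # illustrative
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
)
trainer.train()
```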
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Lyra4-Gutenberg-12B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.63|
|IFEval (0-Shot) |22.12|
|BBH (3-Shot) |34.24|
|MATH Lvl 5 (4-Shot)|11.71|
|GPQA (0-shot) | 9.17|
|MuSR (0-shot) |11.97|
|MMLU-PRO (5-shot) |28.57|
|
singhjagpreet/llama3.1_8b-Gurmukhi-Q8_0-GGUF | singhjagpreet | "2025-03-25T03:34:01" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:singhjagpreet/llama3.1_8b-Gurmukhi",
"base_model:quantized:singhjagpreet/llama3.1_8b-Gurmukhi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-25T03:33:55" | ---
base_model: singhjagpreet/llama3.1_8b-Gurmukhi
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# singhjagpreet/llama3.1_8b-Gurmukhi-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`singhjagpreet/llama3.1_8b-Gurmukhi`](https://huggingface.co/singhjagpreet/llama3.1_8b-Gurmukhi) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/singhjagpreet/llama3.1_8b-Gurmukhi) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora llama3.1_8b-Gurmukhi-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora llama3.1_8b-Gurmukhi-q8_0.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF | Triangle104 | "2025-01-12T22:33:51" | 30 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"base_model:sequelbox/Llama3.1-8B-PlumChat",
"base_model:quantized:sequelbox/Llama3.1-8B-PlumChat",
"license:llama3.1",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2025-01-12T22:33:18" | ---
library_name: transformers
tags:
- mergekit
- merge
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: sequelbox/Llama3.1-8B-PlumChat
license: llama3.1
model-index:
- name: Llama3.1-8B-PlumChat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.22
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 42.43
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 13.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.1
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.01
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.77
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 12.52
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
name: Open LLM Leaderboard
---
# Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF
This model was converted to GGUF format from [`sequelbox/Llama3.1-8B-PlumChat`](https://huggingface.co/sequelbox/Llama3.1-8B-PlumChat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sequelbox/Llama3.1-8B-PlumChat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF --hf-file llama3.1-8b-plumchat-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF --hf-file llama3.1-8b-plumchat-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF --hf-file llama3.1-8b-plumchat-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q6_K-GGUF --hf-file llama3.1-8b-plumchat-q6_k.gguf -c 2048
```
|
tomaszki/mistral-35 | tomaszki | "2024-04-17T18:05:17" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-17T18:02:53" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_monkeypox_gpt4o_1_2e-5_16_undersampling_0.4 | isspek | "2025-03-23T10:55:38" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-28T17:45:27" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanafuusen2001/MajicMix | hanafuusen2001 | "2023-06-14T07:09:27" | 0 | 13 | null | [
"license:other",
"region:us"
] | null | "2023-05-03T10:54:52" | ---
license: other
---
# 聲明 Disclaimer
本資料夾中的模型不是我所製作,版權歸原作者所有(各模型版權詳見 http://www.civitai.com 所示)。我上傳至本資料夾僅爲方便在綫抽取資源,并非盈利。
The models in this folder are not made by me, and the copyright belongs to the original author (see http://www.civitai.com for details on the copyright of each model). I uploaded to this folder only for the convenience of extracting resources online, not for profit.
# 模型列表 List of Models
本資料夾中所有模型詳見下表。
All the models in this folder are detailed in the table below.
| 模型名稱 Model Name | Civitai 頁面鏈接 Civitai Page Link | Civitai 下載鏈接 Civitai Download Link | 百度網盤 Baidu Netdisk |
|----------------------|--------------------|--------------------|--------------------|
|majicmixRealistic_v6.safetensors |https://civitai.com/models/43331?modelVersionId=94640 |https://civitai.com/api/download/models/94640 | |
|majicmixRealistic_v5.safetensors |https://civitai.com/models/43331?modelVersionId=82446 |https://civitai.com/api/download/models/82446 |https://pan.baidu.com/s/1B1EgH3nj0OXsK8xDzx0HIQ?pwd=0000 |
|majicmixRealistic_v4.safetensors |https://civitai.com/models/43331?modelVersionId=55911 |https://civitai.com/api/download/models/55911 |https://pan.baidu.com/s/1Huf0qr4gbdG3Hrsa2JKfYg?pwd=0000 |
|majicmixRealistic_v3.safetensors |https://civitai.com/models/43331?modelVersionId=55620 |https://civitai.com/api/download/models/55620 |https://pan.baidu.com/s/1tvtmiDP_95B9qkSKwPVmCA?pwd=0000 |
|majicmixRealistic_v2.safetensors |https://civitai.com/models/43331?modelVersionId=48289 |https://civitai.com/api/download/models/48289 |https://pan.baidu.com/s/18SA-rUv5V6Bzvt5giDhiRw?pwd=0000 |
## MajicMix Realistic V5
<img src="https://img1.wsimg.com/isteam/ip/062334e1-a8fb-4784-b30a-5b8d15b1aaeb/00020-2238982761.png" width="512" height="">
## MajicMix Realistic V4
<img src="https://img1.wsimg.com/isteam/ip/062334e1-a8fb-4784-b30a-5b8d15b1aaeb/00008-91547360.png" width="512" height="">
## MajicMix Realistic V3
<img src="https://img1.wsimg.com/isteam/ip/062334e1-a8fb-4784-b30a-5b8d15b1aaeb/majicmixRealistic_v3_01.png" width="512" height="">
## MajicMix Realistic V2
<img src="https://img1.wsimg.com/isteam/ip/062334e1-a8fb-4784-b30a-5b8d15b1aaeb/majicmixRealistic_v2_01.png" width="512" height=""> |
RWKV/rwkv-raven-1b5 | RWKV | "2023-05-15T10:08:58" | 1,918 | 12 | transformers | [
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-04T14:57:11" | ---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 1B5 parameters chat version (Raven)
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
The "Raven" models needs to be prompted in a specific way, learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv).
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>

### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Loading in float16 halves memory use; .to(0) moves the model to the first GPU
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
# device_map="auto" lets accelerate shard the weights across all visible GPUs
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
# Inputs start on the first device; accelerate moves activations between shards
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
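For chat-style Raven outputs you may prefer sampling over greedy decoding. Below is a small self-contained sketch using standard `generate` arguments; the sampling values are illustrative, not tuned recommendations, and the prompt template is an assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")

# Hypothetical instruction-style prompt -- see the blogpost for the exact template.
inputs = tokenizer("### Instruction: Tell me about ravens.\n### Response:", return_tensors="pt")
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # illustrative value
    top_p=0.9,         # illustrative value
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```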
## Citation
If you use this model, please consider citing the original work from the original repo [here](https://github.com/BlinkDL/ChatRWKV/) |
# Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features you would like to see included in this dataset, please open a new discussion.

## Dataset Details

## Uses
There are a number of potential uses for this dataset, including (a minimal loading sketch follows the list):
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
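For example, the cards can be loaded with the `datasets` library and mined directly. The repo id below is a placeholder for this dataset's actual Hub id; the `card` column name matches the dataset schema.
```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub id.
cards = load_dataset("hub-user/model-cards", split="train")

# Toy text-mining example: count cards that mention "license".
n_license = sum("license" in card.lower() for card in cards["card"])
print(f"{n_license} of {len(cards)} cards mention a license")
```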
### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

This dataset has a single split.

## Dataset Creation

### Curation Rationale
The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards, and this option may be preferable if you have a very specific use case or require a different format; a minimal sketch of that route follows.
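As an illustration of that alternative route, the `huggingface_hub` client can fetch an individual card on demand:
```python
from huggingface_hub import ModelCard

# Fetch one model card straight from the Hub instead of using this dataset.
card = ModelCard.load("RWKV/rwkv-raven-1b5")
print(card.data)        # parsed YAML metadata
print(card.text[:500])  # start of the markdown body
```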
### Source Data

The source data is the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

#### Data Collection and Processing
The data is downloaded daily using a cron job.
#### Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.

### Annotations

There are no additional annotations in this dataset beyond the model card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we make no claims about the accuracy of the information they contain. Some model cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the model. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

## Dataset Card Authors

## Dataset Card Contact