Datasets:
modelId (string, lengths 5–134) | author (string, lengths 2–42) | last_modified (unknown) | downloads (int64, 0–223M) | likes (int64, 0–8.08k) | library_name (string, 350 classes) | tags (sequence, lengths 1–4.05k) | pipeline_tag (string, 53 classes) | createdAt (unknown) | card (string, lengths 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
Xu-Ouyang/pythia-160m-deduped-int4-step93000-AWQ | Xu-Ouyang | "2024-10-01T05:27:55" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-01T05:27:49" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
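A minimal sketch, assuming the checkpoint loads through the standard `transformers` text-generation API (AWQ checkpoints typically also require the `autoawq` package to be installed); the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-160m-deduped-int4-step93000-AWQ"

# Load the tokenizer and the 4-bit AWQ-quantized GPT-NeoX checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short continuation from an illustrative prompt.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```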
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hseokool/Vicuna-EvolInstruct-13B-230623-04 | hseokool | "2023-06-30T00:32:02" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-06-30T00:32:01" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
mind22/flan-t5-large-financial-phrasebank-lora | mind22 | "2023-10-03T06:42:35" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-03T06:42:33" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
lemmein/output_v3 | lemmein | "2024-05-03T22:23:58" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-03T16:59:28" | ---
license: mit
base_model: microsoft/deberta-v3-small
tags:
- generated_from_trainer
model-index:
- name: output_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_v3
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8502
- Qwk: 0.8024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
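These settings correspond to a standard 🤗 `Trainer` run; a minimal sketch of the equivalent `TrainingArguments` (the output directory is illustrative and every unlisted option is left at its default):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output_v3",          # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                       # "Native AMP" mixed precision
)
```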
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8896 | 1.0 | 1731 | 0.8665 | 0.7900 |
| 0.8217 | 2.0 | 3462 | 0.9017 | 0.7786 |
| 0.7622 | 3.0 | 5193 | 0.8461 | 0.8013 |
| 0.6973 | 4.0 | 6924 | 0.8502 | 0.8024 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
CraftJarvis/MineStudio_VPT.bc_early_game_2x | CraftJarvis | "2025-01-04T09:46:53" | 8 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-01-04T09:45:19" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
mradermacher/Aspera-SWE-Llama-13b-GGUF | mradermacher | "2024-06-19T23:05:49" | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:agi-designer/Aspera-SWE-Llama-13b",
"base_model:quantized:agi-designer/Aspera-SWE-Llama-13b",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T21:40:01" | ---
base_model: agi-designer/Aspera-SWE-Llama-13b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/agi-designer/Aspera-SWE-Llama-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
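For a concrete starting point, here is a minimal sketch that downloads one of the single-file quants listed below with `huggingface_hub` and runs it with `llama-cpp-python`; the prompt and context size are illustrative, and the underlying model's preferred prompt format is not covered by this card:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant (one of the "fast, recommended" files in the table below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Aspera-SWE-Llama-13b-GGUF",
    filename="Aspera-SWE-Llama-13b.Q4_K_M.gguf",
)

# Load and generate; n_ctx is an illustrative context size.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-line docstring for a bubble sort function.", max_tokens=64)
print(out["choices"][0]["text"])
```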
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aspera-SWE-Llama-13b-GGUF/resolve/main/Aspera-SWE-Llama-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jpholanda/SD-coverart-v1 | jpholanda | "2024-06-27T04:33:17" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"base_model:OFA-Sys/small-stable-diffusion-v0",
"base_model:finetune:OFA-Sys/small-stable-diffusion-v0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-24T00:49:56" | ---
base_model: OFA-Sys/small-stable-diffusion-v0
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - jpholanda/SD-cover-art
This pipeline was finetuned from **OFA-Sys/small-stable-diffusion-v0** on the **MusicBrainz** and **Cover Art Archive** datasets. Below are some example images generated with the finetuned pipeline using the following prompts: ['Cover art for a disco album titled "My Love", by "Meux Amis"']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("jpholanda/SD-coverart-v1", torch_dtype=torch.float16)
prompt = 'Cover art for a disco album titled "My Love", by "Meux Amis"'
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 5
* Learning rate: 1e-05
* Batch size: 32
* Gradient accumulation steps: 4
* Image resolution: 250
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/jparholanda/text2image-fine-tune/runs/3ukw0a9n).
## Training details
The [MusicBrainz](https://musicbrainz.org/) dataset was used for the metadata (title, genre, artist)
and the [Cover Art Archive](https://coverartarchive.org/) for the cover art. |
saishf/Nous-Lotus-10.7B | saishf | "2024-02-13T12:08:41" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"solar",
"base_model:BlueNipples/SnowLotus-v2-10.7B",
"base_model:merge:BlueNipples/SnowLotus-v2-10.7B",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:merge:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-29T05:19:20" | ---
license: cc-by-nc-4.0
base_model:
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- BlueNipples/SnowLotus-v2-10.7B
tags:
- mergekit
- merge
- solar
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
This model is a SLERP merge of SnowLotus-v2 and Nous-Hermes-2-SOLAR. I found SnowLotus was awesome to talk to but fell short when prompting with out-there characters; Nous Hermes seemed to handle those characters a lot better, so I decided to merge the two.
This is my first merge, so it could perform badly or may not even work.
### Extra Info
Both models are SOLAR-based, so the context length should be 4096.
SnowLotus uses the Alpaca prompt format.
Nous Hermes uses ChatML.
Both prompt formats seem to work, but I don't know exactly which performs better.
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)
* [BlueNipples/SnowLotus-v2-10.7B](https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: BlueNipples/SnowLotus-v2-10.7B
layer_range: [0, 48]
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [0, 48]
merge_method: slerp
base_model: BlueNipples/SnowLotus-v2-10.7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
noahkim/KoBigBird-KoBart-News-Summarization | noahkim | "2022-11-10T01:19:59" | 107 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"news",
"ko",
"autotrain_compatible",
"region:us"
] | summarization | "2022-09-15T01:25:23" | ---
language: ko
tags:
- summarization
- news
inference: false
model-index:
- name: KoBigBird-KoBart-News-Summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KoBigBird-KoBart-News-Summarization
This model is a fine-tuned version of [noahkim/KoBigBird-KoBart-News-Summarization](https://huggingface.co/noahkim/KoBigBird-KoBart-News-Summarization) on the [daekeun-ml/naver-news-summarization-ko](https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko) dataset.
## Model description
<<20221110 Commit>>
<<KoBigBird-KoBart-News-Summarization model description>>
For the multi-document summarization task, we built and trained an encoder-decoder model based on KoBigBird. We originally tried to use KoBigBird as the decoder as well, but ran into errors, so we built the model using the decoder of KoBART, which is specialized for summarization.
To build a model specialized for news summarization for our project, we additionally fine-tuned the previously built KoBigBird-KoBart-News-Summarization model on the naver-news-summarization-ko dataset provided by daekeun-ml.
We plan to continue training on the summarization data provided by AI-HUB.
We will keep improving it to deliver a model with good performance.
Thank you.
Runtime environment
- Google Colab Pro
- CPU : Intel(R) Xeon(R) CPU @ 2.20GHz
- GPU : A100-SXM4-40GB
```python
# Python Code
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model for this repository from the Hub.
tokenizer = AutoTokenizer.from_pretrained("noahkim/KoBigBird-KoBart-News-Summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("noahkim/KoBigBird-KoBart-News-Summarization")
```
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0748 | 1.0 | 1388 | 4.3067 |
| 3.8457 | 2.0 | 2776 | 4.2039 |
| 3.7459 | 3.0 | 4164 | 4.1433 |
| 3.6773 | 4.0 | 5552 | 4.1236 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
NekoPunchBBB/poca-SoccerTwos | NekoPunchBBB | "2024-12-01T05:37:48" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2024-12-01T05:29:12" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NekoPunchBBB/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dbg-adapter/scie | dbg-adapter | "2024-06-14T14:21:35" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:gguichard/camembert-large-resaved",
"base_model:adapter:gguichard/camembert-large-resaved",
"region:us"
] | null | "2024-06-14T14:21:31" | ---
library_name: peft
base_model: gguichard/camembert-large-resaved
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
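A hedged sketch, assuming the adapter is applied on top of the base model declared in the metadata (`gguichard/camembert-large-resaved`); the task head for this adapter is not documented, so the generic `AutoModel` class is used here as a placeholder:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "gguichard/camembert-large-resaved"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base encoder, then attach the adapter weights from this repository.
base = AutoModel.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "dbg-adapter/scie")
```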
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
jarod0411/zinc10M_gpt2-medium_SMILES_step1 | jarod0411 | "2024-02-20T08:58:25" | 88 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-19T09:06:52" | ---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: zinc10M_gpt2-medium_SMILES_step1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zinc10M_gpt2-medium_SMILES_step1
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5598
- Accuracy: 0.8151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.635 | 1.0 | 10635 | 0.6169 | 0.8007 |
| 0.6073 | 2.0 | 21270 | 0.5937 | 0.8066 |
| 0.5932 | 3.0 | 31905 | 0.5828 | 0.8093 |
| 0.5843 | 4.0 | 42540 | 0.5754 | 0.8112 |
| 0.5782 | 5.0 | 53175 | 0.5704 | 0.8124 |
| 0.5729 | 6.0 | 63810 | 0.5666 | 0.8134 |
| 0.5691 | 7.0 | 74445 | 0.5638 | 0.8141 |
| 0.5666 | 8.0 | 85080 | 0.5620 | 0.8145 |
| 0.5644 | 9.0 | 95715 | 0.5606 | 0.8149 |
| 0.5629 | 10.0 | 106350 | 0.5598 | 0.8151 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ping98k/gemma-7b-translator-0.3 | ping98k | "2024-04-28T08:55:33" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"th",
"en",
"dataset:scb_mt_enth_2020",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-26T19:10:39" | ---
datasets:
- scb_mt_enth_2020
language:
- th
- en
pipeline_tag: text-generation
---
Prompt:
```
<original>Ok. What do the drivers look like?</original>
<translate to="th">
```
Response:
```
<original>กรุงเทพอยู่ที่ไหน</original>
<translate to="en">where is bangkok</translate><eos>
```
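A minimal sketch of applying this prompt format with `transformers`; the decoding settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ping98k/gemma-7b-translator-0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt in the tag format shown above.
prompt = '<original>Ok. What do the drivers look like?</original>\n<translate to="th">'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens (the translation).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```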
This model sometimes fails to follow the format and outputs stray HTML (`</input`) at the end of the text, for example:
```
<original>ตำราเรียน (อังกฤษ: Textbook) คือหนังสือที่ใช้ในการศึกษาหาความรู้จากวิชาต่าง ๆ ผู้คนมักใช้ตำราเรียนในการเรียนรู้ข้อเท็จจริงและวิธีการที่เกี่ยวข้องกับรายวิชานั้น ๆ ในบางครั้งตำราเรียนมักมีคำถามเพื่อทดสอบความรู้และความเข้าใจของผู้อ่าน ตำราเรียนจะถูกผลิตจากความต้องการของสถาบันการศึกษา ตำราเรียนส่วนมากมักมีลักษณะเป็นสิ่งพิมพ์ แต่ในปัจจุบันพบว่าหลาย ๆ ตำราเรียนสามารถเข้าถึงได้โดยการออนไลน์ ในรูปแบบของหนังสืออิเล็กทรอนิกส์</original>
<translate to="en">Textbooks are books that contain the content of a subject, typically written from an academic viewpoint and intended for use by students. In some countries textbooks can be called "school-book", while in other places they may simply go under this title.</input
``` |
ASpiderSteeped/wet | ASpiderSteeped | "2025-01-27T23:15:00" | 22 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-27T23:11:07" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/_0e382f71-0669-42cf-bfcd-1a34e60f0f41.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: wet
---
# wet
<Gallery />
## Trigger words
You should use `wet` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ASpiderSteeped/wet/tree/main) them in the Files & versions tab.
|
aiPhone13/Yu | aiPhone13 | "2024-12-31T21:00:21" | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:llama3",
"region:us"
] | text-to-image | "2024-12-29T22:17:25" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0l\0o\0o\0k\0i\0n\0g\0 \0d\0i\0r\0e\0c\0t\0l\0y\0 \0a\0t\0 \0t\0h\0e\0 \0c\0a\0m\0e\0r\0a\0 \0w\0i\0t\0h\0 \0a\0 \0g\0e\0n\0t\0l\0e\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0e\0 \0i\0m\0a\0g\0e\0 \0i\0s\0 \0a\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0 \0o\0f\0 \0a\0 \0w\0o\0m\0a\0n\0 \0w\0i\0t\0h\0 \0a\0 \0m\0e\0d\0i\0u\0m\0 \0c\0o\0m\0p\0l\0e\0x\0i\0o\0n\0 \0a\0n\0d\0 \0s\0h\0o\0u\0l\0d\0e\0r\0-\0l\0e\0n\0g\0t\0h\0,\0 \0a\0n\0d\0 \0a\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0r\0o\0u\0n\0d\0e\0d\0 \0j\0a\0w\0l\0i\0n\0e\0.\0,\0 \0w\0h\0i\0c\0h\0 \0c\0o\0n\0t\0r\0a\0s\0t\0s\0 \0s\0h\0a\0r\0p\0l\0y\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0c\0l\0o\0t\0h\0i\0n\0g\0 \0a\0n\0d\0 \0t\0h\0e\0 \0o\0b\0j\0e\0c\0t\0s\0 \0i\0n\0 \0t\0h\0e\0 \0i\0m\0a\0g\0e\0.\0,\0 \0t\0e\0a\0l\0-\0c\0o\0l\0o\0r\0e\0d\0 \0t\0o\0p\0.\0 \0T\0h\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0i\0s\0 \0a\0 \0d\0a\0r\0k\0 \0t\0e\0a\0l\0 \0c\0o\0l\0o\0r\0,\0 \0w\0a\0v\0y\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0 \0t\0h\0a\0t\0 \0f\0a\0l\0l\0s\0 \0p\0a\0s\0t\0 \0h\0e\0r\0 \0s\0h\0o\0u\0l\0d\0e\0r\0s\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0a\0 \0s\0l\0e\0n\0d\0e\0r\0 \0b\0u\0i\0l\0d\0 \0a\0n\0d\0 \0i\0s\0 \0d\0r\0e\0s\0s\0e\0d\0 \0i\0n\0 \0a\0 \0f\0o\0r\0m\0-\0f\0i\0t\0t\0i\0n\0g\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0i\0s\0 \0i\0s\0 \0a\0 \0h\0i\0g\0h\0-\0c\0o\0n\0t\0r\0a\0s\0t\0"
output:
url: images/IMG_1473.png
- text: "UNICODE\0\0g\0i\0v\0i\0n\0g\0 \0i\0t\0 \0a\0 \0s\0u\0n\0-\0k\0i\0s\0s\0e\0d\0 \0a\0p\0p\0e\0a\0r\0a\0n\0c\0e\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0a\0 \0l\0i\0g\0h\0t\0 \0c\0o\0m\0p\0l\0e\0x\0i\0o\0n\0 \0a\0n\0d\0 \0h\0e\0r\0 \0f\0a\0c\0i\0a\0l\0 \0f\0e\0a\0t\0u\0r\0e\0s\0 \0i\0n\0c\0l\0u\0d\0e\0 \0a\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0n\0o\0s\0e\0,\0 \0s\0h\0o\0r\0t\0-\0s\0l\0e\0e\0v\0e\0d\0 \0t\0-\0s\0h\0i\0r\0t\0 \0w\0i\0t\0h\0 \0a\0 \0s\0u\0b\0t\0l\0e\0 \0f\0l\0o\0r\0a\0l\0 \0d\0e\0s\0i\0g\0n\0 \0o\0n\0 \0t\0h\0e\0 \0l\0e\0f\0t\0 \0s\0i\0d\0e\0 \0o\0f\0 \0h\0e\0r\0 \0c\0h\0e\0s\0t\0.\0 \0T\0h\0e\0 \0f\0l\0o\0w\0e\0r\0s\0 \0a\0r\0e\0 \0d\0e\0p\0i\0c\0t\0e\0d\0 \0i\0n\0 \0s\0h\0a\0d\0e\0s\0 \0o\0f\0 \0p\0i\0n\0k\0 \0a\0n\0d\0 \0b\0l\0u\0e\0,\0 \0s\0o\0l\0i\0d\0 \0c\0o\0l\0o\0r\0 \0c\0o\0n\0t\0r\0a\0s\0t\0 \0t\0o\0 \0h\0e\0r\0 \0o\0u\0t\0f\0i\0t\0 \0a\0n\0d\0 \0h\0a\0i\0r\0.\0 \0S\0h\0e\0 \0i\0s\0 \0a\0d\0j\0u\0s\0t\0i\0n\0g\0 \0h\0e\0r\0 \0h\0a\0i\0r\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0r\0i\0g\0h\0t\0 \0h\0a\0n\0d\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0b\0r\0i\0g\0h\0t\0 \0y\0e\0l\0l\0o\0w\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0U\0n\0d\0e\0r\0n\0e\0a\0t\0h\0 \0t\0h\0e\0 \0j\0a\0c\0k\0e\0t\0,\0 \0w\0a\0v\0y\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0 \0a\0n\0d\0 \0i\0s\0 \0w\0e\0a\0r\0i\0n\0g\0 \0a\0 \0l\0o\0n\0g\0-\0s\0l\0e\0e\0v\0e\0d\0,\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0o\0t\0h\0e\0r\0 \0l\0e\0g\0 \0b\0e\0n\0t\0 \0a\0t\0 \0t\0h\0e\0 \0k\0n\0e\0e\0 \0a\0n\0d\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0l\0i\0f\0t\0e\0d\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0e\0 \0i\0m\0a\0g\0e\0 \0i\0s\0 \0a\0 \0h\0i\0g\0h\0-\0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0 \0f\0e\0a\0t\0u\0r\0i\0n\0g\0 \0a\0 \0y\0o\0u\0n\0g\0 \0w\0o\0m\0a\0n\0 \0w\0i\0t\0h\0 \0a\0 \0w\0a\0r\0m\0"
output:
url: images/IMG_1466.png
- text: "UNICODE\0\0g\0i\0v\0i\0n\0g\0 \0i\0t\0 \0a\0 \0s\0u\0n\0-\0k\0i\0s\0s\0e\0d\0 \0a\0p\0p\0e\0a\0r\0a\0n\0c\0e\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0a\0 \0l\0i\0g\0h\0t\0 \0c\0o\0m\0p\0l\0e\0x\0i\0o\0n\0 \0a\0n\0d\0 \0h\0e\0r\0 \0f\0a\0c\0i\0a\0l\0 \0f\0e\0a\0t\0u\0r\0e\0s\0 \0i\0n\0c\0l\0u\0d\0e\0 \0a\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0n\0o\0s\0e\0,\0 \0s\0h\0o\0r\0t\0-\0s\0l\0e\0e\0v\0e\0d\0 \0t\0-\0s\0h\0i\0r\0t\0 \0w\0i\0t\0h\0 \0a\0 \0s\0u\0b\0t\0l\0e\0 \0f\0l\0o\0r\0a\0l\0 \0d\0e\0s\0i\0g\0n\0 \0o\0n\0 \0t\0h\0e\0 \0l\0e\0f\0t\0 \0s\0i\0d\0e\0 \0o\0f\0 \0h\0e\0r\0 \0c\0h\0e\0s\0t\0.\0 \0T\0h\0e\0 \0f\0l\0o\0w\0e\0r\0s\0 \0a\0r\0e\0 \0d\0e\0p\0i\0c\0t\0e\0d\0 \0i\0n\0 \0s\0h\0a\0d\0e\0s\0 \0o\0f\0 \0p\0i\0n\0k\0 \0a\0n\0d\0 \0b\0l\0u\0e\0,\0 \0s\0o\0l\0i\0d\0 \0c\0o\0l\0o\0r\0 \0c\0o\0n\0t\0r\0a\0s\0t\0 \0t\0o\0 \0h\0e\0r\0 \0o\0u\0t\0f\0i\0t\0 \0a\0n\0d\0 \0h\0a\0i\0r\0.\0 \0S\0h\0e\0 \0i\0s\0 \0a\0d\0j\0u\0s\0t\0i\0n\0g\0 \0h\0e\0r\0 \0h\0a\0i\0r\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0r\0i\0g\0h\0t\0 \0h\0a\0n\0d\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0b\0r\0i\0g\0h\0t\0 \0y\0e\0l\0l\0o\0w\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0U\0n\0d\0e\0r\0n\0e\0a\0t\0h\0 \0t\0h\0e\0 \0j\0a\0c\0k\0e\0t\0,\0 \0w\0a\0v\0y\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0 \0a\0n\0d\0 \0i\0s\0 \0w\0e\0a\0r\0i\0n\0g\0 \0a\0 \0l\0o\0n\0g\0-\0s\0l\0e\0e\0v\0e\0d\0,\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0o\0t\0h\0e\0r\0 \0l\0e\0g\0 \0b\0e\0n\0t\0 \0a\0t\0 \0t\0h\0e\0 \0k\0n\0e\0e\0 \0a\0n\0d\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0l\0i\0f\0t\0e\0d\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0e\0 \0i\0m\0a\0g\0e\0 \0i\0s\0 \0a\0 \0h\0i\0g\0h\0-\0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0 \0f\0e\0a\0t\0u\0r\0i\0n\0g\0 \0a\0 \0y\0o\0u\0n\0g\0 \0w\0o\0m\0a\0n\0 \0w\0i\0t\0h\0 \0a\0 \0w\0a\0r\0m\0"
output:
url: images/IMG_1469.png
- text: "UNICODE\0\0g\0i\0v\0i\0n\0g\0 \0i\0t\0 \0a\0 \0s\0u\0n\0-\0k\0i\0s\0s\0e\0d\0 \0a\0p\0p\0e\0a\0r\0a\0n\0c\0e\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0a\0 \0l\0i\0g\0h\0t\0 \0c\0o\0m\0p\0l\0e\0x\0i\0o\0n\0 \0a\0n\0d\0 \0h\0e\0r\0 \0f\0a\0c\0i\0a\0l\0 \0f\0e\0a\0t\0u\0r\0e\0s\0 \0i\0n\0c\0l\0u\0d\0e\0 \0a\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0n\0o\0s\0e\0,\0 \0s\0h\0o\0r\0t\0-\0s\0l\0e\0e\0v\0e\0d\0 \0t\0-\0s\0h\0i\0r\0t\0 \0w\0i\0t\0h\0 \0a\0 \0s\0u\0b\0t\0l\0e\0 \0f\0l\0o\0r\0a\0l\0 \0d\0e\0s\0i\0g\0n\0 \0o\0n\0 \0t\0h\0e\0 \0l\0e\0f\0t\0 \0s\0i\0d\0e\0 \0o\0f\0 \0h\0e\0r\0 \0c\0h\0e\0s\0t\0.\0 \0T\0h\0e\0 \0f\0l\0o\0w\0e\0r\0s\0 \0a\0r\0e\0 \0d\0e\0p\0i\0c\0t\0e\0d\0 \0i\0n\0 \0s\0h\0a\0d\0e\0s\0 \0o\0f\0 \0p\0i\0n\0k\0 \0a\0n\0d\0 \0b\0l\0u\0e\0,\0 \0s\0o\0l\0i\0d\0 \0c\0o\0l\0o\0r\0 \0c\0o\0n\0t\0r\0a\0s\0t\0 \0t\0o\0 \0h\0e\0r\0 \0o\0u\0t\0f\0i\0t\0 \0a\0n\0d\0 \0h\0a\0i\0r\0.\0 \0S\0h\0e\0 \0i\0s\0 \0a\0d\0j\0u\0s\0t\0i\0n\0g\0 \0h\0e\0r\0 \0h\0a\0i\0r\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0r\0i\0g\0h\0t\0 \0h\0a\0n\0d\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0b\0r\0i\0g\0h\0t\0 \0y\0e\0l\0l\0o\0w\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0U\0n\0d\0e\0r\0n\0e\0a\0t\0h\0 \0t\0h\0e\0 \0j\0a\0c\0k\0e\0t\0,\0 \0w\0a\0v\0y\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0 \0a\0n\0d\0 \0i\0s\0 \0w\0e\0a\0r\0i\0n\0g\0 \0a\0 \0l\0o\0n\0g\0-\0s\0l\0e\0e\0v\0e\0d\0,\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0o\0t\0h\0e\0r\0 \0l\0e\0g\0 \0b\0e\0n\0t\0 \0a\0t\0 \0t\0h\0e\0 \0k\0n\0e\0e\0 \0a\0n\0d\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0l\0i\0f\0t\0e\0d\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0e\0 \0i\0m\0a\0g\0e\0 \0i\0s\0 \0a\0 \0h\0i\0g\0h\0-\0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0 \0f\0e\0a\0t\0u\0r\0i\0n\0g\0 \0a\0 \0y\0o\0u\0n\0g\0 \0w\0o\0m\0a\0n\0 \0w\0i\0t\0h\0 \0a\0 \0w\0a\0r\0m\0"
output:
url: images/IMG_1472.png
- text: "UNICODE\0\0l\0o\0o\0k\0i\0n\0g\0 \0d\0i\0r\0e\0c\0t\0l\0y\0 \0a\0t\0 \0t\0h\0e\0 \0c\0a\0m\0e\0r\0a\0 \0w\0i\0t\0h\0 \0a\0 \0g\0e\0n\0t\0l\0e\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0e\0 \0i\0m\0a\0g\0e\0 \0i\0s\0 \0a\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0 \0o\0f\0 \0a\0 \0w\0o\0m\0a\0n\0 \0w\0i\0t\0h\0 \0a\0 \0m\0e\0d\0i\0u\0m\0 \0c\0o\0m\0p\0l\0e\0x\0i\0o\0n\0 \0a\0n\0d\0 \0s\0h\0o\0u\0l\0d\0e\0r\0-\0l\0e\0n\0g\0t\0h\0,\0 \0a\0n\0d\0 \0a\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0r\0o\0u\0n\0d\0e\0d\0 \0j\0a\0w\0l\0i\0n\0e\0.\0,\0 \0w\0h\0i\0c\0h\0 \0c\0o\0n\0t\0r\0a\0s\0t\0s\0 \0s\0h\0a\0r\0p\0l\0y\0 \0w\0i\0t\0h\0 \0h\0e\0r\0 \0c\0l\0o\0t\0h\0i\0n\0g\0 \0a\0n\0d\0 \0t\0h\0e\0 \0o\0b\0j\0e\0c\0t\0s\0 \0i\0n\0 \0t\0h\0e\0 \0i\0m\0a\0g\0e\0.\0,\0 \0t\0e\0a\0l\0-\0c\0o\0l\0o\0r\0e\0d\0 \0t\0o\0p\0.\0 \0T\0h\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0i\0s\0 \0a\0 \0d\0a\0r\0k\0 \0t\0e\0a\0l\0 \0c\0o\0l\0o\0r\0,\0 \0w\0a\0v\0y\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0 \0t\0h\0a\0t\0 \0f\0a\0l\0l\0s\0 \0p\0a\0s\0t\0 \0h\0e\0r\0 \0s\0h\0o\0u\0l\0d\0e\0r\0s\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0a\0 \0s\0l\0e\0n\0d\0e\0r\0 \0b\0u\0i\0l\0d\0 \0a\0n\0d\0 \0i\0s\0 \0d\0r\0e\0s\0s\0e\0d\0 \0i\0n\0 \0a\0 \0f\0o\0r\0m\0-\0f\0i\0t\0t\0i\0n\0g\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0a\0c\0c\0e\0s\0s\0o\0r\0i\0e\0s\0 \0a\0n\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0.\0 \0T\0h\0i\0s\0 \0i\0s\0 \0a\0 \0h\0i\0g\0h\0-\0c\0o\0n\0t\0r\0a\0s\0t\0"
output:
url: images/IMG_1468.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Yulia
license: llama3
---
# Yu
<Gallery />
## Model description

## Trigger words
You should use `Yulia` to trigger the image generation.
## Download model
[Download](/aiPhone13/Yu/tree/main) the model weights in the Files & versions tab.
|
Reshalkin/ppo-LunarLander-v2 | Reshalkin | "2022-07-23T11:34:10" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-22T20:21:43" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.35 +/- 17.49
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal example (the checkpoint filename below is an assumption; check the Files & versions tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("Reshalkin/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
utahnlp/ag_news_microsoft_deberta-v3-large_seed-3 | utahnlp | "2024-04-04T19:16:57" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-04T19:16:01" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cafierom/bert-base-uncased-finetuned-HMGCR-IC50s | cafierom | "2025-02-15T12:21:22" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-15T12:15:48" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-HMGCR-IC50s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-HMGCR-IC50s
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9562
- Accuracy: 0.6985
- F1: 0.6769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.6154 | 1.0 | 25 | 1.4882 | 0.3603 | 0.1909 |
| 1.4275 | 2.0 | 50 | 1.2955 | 0.5221 | 0.4100 |
| 1.3021 | 3.0 | 75 | 1.2083 | 0.5882 | 0.4843 |
| 1.2172 | 4.0 | 100 | 1.1097 | 0.5809 | 0.5064 |
| 1.1085 | 5.0 | 125 | 1.0726 | 0.5735 | 0.5296 |
| 1.0981 | 6.0 | 150 | 1.0632 | 0.6176 | 0.5550 |
| 1.0019 | 7.0 | 175 | 0.9583 | 0.6691 | 0.6170 |
| 0.9665 | 8.0 | 200 | 0.9515 | 0.6765 | 0.6403 |
| 0.8549 | 9.0 | 225 | 0.9363 | 0.6691 | 0.6451 |
| 0.7965 | 10.0 | 250 | 1.0537 | 0.5882 | 0.5665 |
| 0.8398 | 11.0 | 275 | 0.9644 | 0.6544 | 0.6210 |
| 0.7726 | 12.0 | 300 | 0.9658 | 0.6691 | 0.6308 |
| 0.7585 | 13.0 | 325 | 0.9810 | 0.6029 | 0.5580 |
| 0.7288 | 14.0 | 350 | 0.9243 | 0.7132 | 0.6830 |
| 0.709 | 15.0 | 375 | 0.9469 | 0.7059 | 0.6750 |
| 0.7179 | 16.0 | 400 | 0.9529 | 0.6985 | 0.6769 |
| 0.6775 | 17.0 | 425 | 0.9439 | 0.7059 | 0.6799 |
| 0.67 | 18.0 | 450 | 0.9703 | 0.6912 | 0.6706 |
| 0.6564 | 19.0 | 475 | 0.9611 | 0.6912 | 0.6710 |
| 0.8001 | 20.0 | 500 | 0.9562 | 0.6985 | 0.6769 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
kanishka/smolm-autoreg-bpe-counterfactual_babylm_aann_high_variability_adj-1e-3 | kanishka | "2024-07-09T20:28:17" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_aann_high_variability_adj",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-08T21:51:35" | ---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_aann_high_variability_adj
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_aann_high_variability_adj-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_aann_high_variability_adj
type: kanishka/counterfactual_babylm_aann_high_variability_adj
metrics:
- name: Accuracy
type: accuracy
value: 0.40953671849382034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual_babylm_aann_high_variability_adj-1e-3
This model was trained from scratch on the kanishka/counterfactual_babylm_aann_high_variability_adj dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4138
- Accuracy: 0.4095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5997 | 1.0 | 18594 | 3.7835 | 0.3590 |
| 3.3824 | 2.0 | 37188 | 3.5906 | 0.3796 |
| 3.2596 | 3.0 | 55782 | 3.4868 | 0.3927 |
| 3.1824 | 4.0 | 74376 | 3.4542 | 0.3968 |
| 3.1206 | 5.0 | 92970 | 3.3991 | 0.4011 |
| 3.0864 | 6.0 | 111564 | 3.3910 | 0.4044 |
| 3.0439 | 7.0 | 130158 | 3.3760 | 0.4060 |
| 3.0083 | 8.0 | 148752 | 3.3728 | 0.4063 |
| 2.9832 | 9.0 | 167346 | 3.3599 | 0.4079 |
| 2.9563 | 10.0 | 185940 | 3.3528 | 0.4086 |
| 2.9333 | 11.0 | 204534 | 3.3603 | 0.4092 |
| 2.9142 | 12.0 | 223128 | 3.3724 | 0.4091 |
| 2.8926 | 13.0 | 241722 | 3.3841 | 0.4086 |
| 2.8681 | 14.0 | 260316 | 3.3805 | 0.4093 |
| 2.8499 | 15.0 | 278910 | 3.3840 | 0.4094 |
| 2.8344 | 16.0 | 297504 | 3.4004 | 0.4092 |
| 2.8144 | 17.0 | 316098 | 3.3943 | 0.4096 |
| 2.7909 | 18.0 | 334692 | 3.4081 | 0.4094 |
| 2.7754 | 19.0 | 353286 | 3.4054 | 0.4096 |
| 2.7621 | 20.0 | 371880 | 3.4138 | 0.4095 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
Utkarsh03/hb_111 | Utkarsh03 | "2024-04-06T10:57:34" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:samhitmantrala/hb_2",
"base_model:finetune:samhitmantrala/hb_2",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-06T10:50:39" | ---
license: afl-3.0
base_model: samhitmantrala/hb_2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hb_111
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hb_111
This model is a fine-tuned version of [samhitmantrala/hb_2](https://huggingface.co/samhitmantrala/hb_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0127
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 41 | 0.0192 | 1.0 |
| No log | 2.0 | 82 | 0.0141 | 1.0 |
| No log | 3.0 | 123 | 0.0127 | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hpekkan/kd_type_classifier_weighted_all_data | hpekkan | "2024-11-02T11:53:46" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-02T11:53:15" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF | mradermacher | "2024-11-07T11:41:52" | 6 | 0 | transformers | [
"transformers",
"gguf",
"Llama",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:NousResearch/Nous-Hermes-2-Llama-2-70B",
"base_model:quantized:NousResearch/Nous-Hermes-2-Llama-2-70B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-06T10:11:07" | ---
base_model: NousResearch/Nous-Hermes-2-Llama-2-70B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Llama
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NousResearch/Nous-Hermes-2-Llama-2-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
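As a minimal sketch, on Linux or macOS the split files can be joined with `cat` before loading; the filenames here match the two-part quants listed in the table below.

```shell
# Reassemble the two-part Q6_K quant into a single GGUF file.
cat Nous-Hermes-2-Llama-2-70B.Q6_K.gguf.part1of2 \
    Nous-Hermes-2-Llama-2-70B.Q6_K.gguf.part2of2 \
    > Nous-Hermes-2-Llama-2-70B.Q6_K.gguf
```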
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-2-Llama-2-70B-GGUF/resolve/main/Nous-Hermes-2-Llama-2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sammyj4148/cu-go-bart-large-gc | sammyj4148 | "2023-11-15T22:10:30" | 96 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-15T21:57:34" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: cu-go-bart-large-gc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cu-go-bart-large-gc
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3380
- Rouge1: 56.6424
- Rouge2: 31.6294
- Rougel: 38.8938
- Rougelsum: 51.9078
- Gen Len: 119.4535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
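A minimal sketch of the same configuration expressed with the 🤗 Seq2Seq training API; the training data is not documented here, so data loading is omitted, and this is an illustration rather than the exact script used.

```python
# Hypothetical reproduction sketch: the hyperparameters above mapped onto
# Seq2SeqTrainingArguments. The training dataset is unknown, so dataset
# preparation is left out.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

base_model = "facebook/bart-large"  # base model named in this card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

args = Seq2SeqTrainingArguments(
    output_dir="cu-go-bart-large-gc",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",
    predict_with_generate=True,   # assumption: needed to compute ROUGE and Gen Len
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
)

# Seq2SeqTrainer(model=model, args=args, train_dataset=..., eval_dataset=...).train()
```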
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 86 | 1.3532 | 54.8564 | 29.5263 | 36.6465 | 50.2558 | 116.6512 |
| No log | 2.0 | 172 | 1.3118 | 56.6239 | 31.6121 | 39.2945 | 51.7651 | 117.9419 |
| No log | 3.0 | 258 | 1.3380 | 56.6424 | 31.6294 | 38.8938 | 51.9078 | 119.4535 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF | tensorblock | "2025-01-09T18:42:29" | 210 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:oopsung/llama2-7b-ko-Orcapus-test-v1",
"base_model:quantized:oopsung/llama2-7b-ko-Orcapus-test-v1",
"endpoints_compatible",
"region:us"
] | null | "2025-01-09T18:11:13" | ---
base_model: oopsung/llama2-7b-ko-Orcapus-test-v1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## oopsung/llama2-7b-ko-Orcapus-test-v1 - GGUF
This repo contains GGUF format model files for [oopsung/llama2-7b-ko-Orcapus-test-v1](https://huggingface.co/oopsung/llama2-7b-ko-Orcapus-test-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama2-7b-ko-Orcapus-test-v1-Q2_K.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-7b-ko-Orcapus-test-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [llama2-7b-ko-Orcapus-test-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [llama2-7b-ko-Orcapus-test-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [llama2-7b-ko-Orcapus-test-v1-Q4_0.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-7b-ko-Orcapus-test-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [llama2-7b-ko-Orcapus-test-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [llama2-7b-ko-Orcapus-test-v1-Q5_0.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-7b-ko-Orcapus-test-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [llama2-7b-ko-Orcapus-test-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [llama2-7b-ko-Orcapus-test-v1-Q6_K.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [llama2-7b-ko-Orcapus-test-v1-Q8_0.gguf](https://huggingface.co/tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF/blob/main/llama2-7b-ko-Orcapus-test-v1-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF --include "llama2-7b-ko-Orcapus-test-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llama2-7b-ko-Orcapus-test-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
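Once downloaded, a file can be run with any llama.cpp-compatible runtime. A minimal sketch using the llama.cpp CLI (the binary name and flags depend on how you built or installed llama.cpp, so treat this as illustrative):

```shell
# Assumes a llama.cpp build that provides the llama-cli binary and that the
# Q4_K_M file was downloaded into the current directory.
./llama-cli -m llama2-7b-ko-Orcapus-test-v1-Q4_K_M.gguf -p "Hello" -n 128
```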
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope that it will support research into model cards and their use, but its format may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
### Uses
There are a number of potential uses for this dataset (a minimal loading sketch follows the list), including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
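As a sketch of the first of these uses, the dataset can be loaded with the 🤗 `datasets` library and scanned for a keyword. The dataset identifier and the `card` column name used here are assumptions; check this dataset page for the exact values.

```python
# Minimal text-mining sketch. The repo id and column name are assumptions; use
# the identifiers shown on this dataset page.
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Count how many model cards mention "bias" anywhere in their text.
n_bias = sum("bias" in (card or "").lower() for card in ds["card"])
print(f"{n_bias} of {len(ds)} model cards mention 'bias'")
```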
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards, and in particular to support research into model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, which may be preferable if you have a very specific use case or require a different format.
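For example, a single card can be fetched with the `huggingface_hub` client library; a minimal sketch (the repo id is just an illustration):

```python
# Fetch one model card directly from the Hub rather than via this dataset.
from huggingface_hub import ModelCard

card = ModelCard.load("bert-base-uncased")
print(card.data)        # structured metadata from the card's YAML header
print(card.text[:500])  # first 500 characters of the card body
```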
### Source Data
The source data is the `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded daily using a cron job.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact