Dataset schema (one row per Hub model):

| Column | Type | Range / stats |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-12 06:28:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 517 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-12 06:24:43 |
| card | string | length 11 to 1.01M |
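This dump can be read with the `datasets` library; a minimal sketch, assuming the dump is hosted on the Hub (the repo id below is a placeholder, not the actual repository):

```python
from datasets import load_dataset

# Placeholder repo id for illustration; substitute the real dataset repository.
ds = load_dataset("username/hub-model-cards", split="train")

# Each record follows the schema above.
print(ds.features)
print(ds[0]["modelId"], ds[0]["pipeline_tag"])
```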
MomoGrech/XLM-R-Large_Kurdish_Sorani_Text_Classification | MomoGrech | 2025-05-22T15:45:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-22T15:06:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
piotr-ai/polanka_4b_v0.2_qwen3_gguf | piotr-ai | 2025-05-22T15:44:54Z | 6 | 0 | null | [
"gguf",
"text-generation",
"pl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-20T18:47:53Z | ---
license: apache-2.0
language:
- pl
- en
pipeline_tag: text-generation
--- |
kandasani/Telugu_sentimental_analysis | kandasani | 2025-05-22T15:44:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-22T15:43:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TSheylock/SASOK_V1 | TSheylock | 2025-05-22T15:34:29Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T15:33:13Z | # SASOK Model
This package contains the SASOK cognitive model architecture implemented in PyTorch, ready for use with HuggingFace Transformers.
## Structure
- `model.py` — SASOK transformer with BatchNorm + LayerNorm
- `tokenizer.py` — HuggingFace-compatible tokenizer
- `train.py` — Training script using `Trainer`
- `cli.py` — CLI for generation
## Usage
### Train
```bash
python train.py
```
### CLI Inference
```bash
python cli.py "Hello, who are you?"
```
### Push to HuggingFace Hub
```python
from transformers import AutoModel, AutoTokenizer
# Load the trained checkpoint first (the local path is illustrative)
model = AutoModel.from_pretrained("./sasok-model")
tokenizer = AutoTokenizer.from_pretrained("./sasok-model")
model.push_to_hub("sasok-model")
tokenizer.push_to_hub("sasok-model")
``` |
Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF | Triangle104 | 2025-05-22T15:29:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v4",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-22T15:27:24Z | ---
license: apache-2.0
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg
language:
- en
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v4
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v4`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. This type of single-response dataset causes degraded output quality in long multi-turn chats when used to train reasoning models, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model itself to generate the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning was in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference, that is, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, ensuring the model was never trained to see the reasoning block in the context, just as it will be used at inference time.
The result of training QwQ on this dataset with this method is consistently coherent and interesting output, even in long multi-turn RP chats. As far as we know, this is the first correctly-trained reasoning model for RP and creative writing.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -c 2048
```
|
FormlessAI/7c1ab9da-9cb6-47e4-97e4-505eb72dc9ac | FormlessAI | 2025-05-22T15:23:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"unsloth",
"arxiv:2305.18290",
"base_model:unsloth/gemma-1.1-2b-it",
"base_model:finetune:unsloth/gemma-1.1-2b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T14:37:57Z | ---
base_model: unsloth/gemma-1.1-2b-it
library_name: transformers
model_name: 7c1ab9da-9cb6-47e4-97e4-505eb72dc9ac
tags:
- generated_from_trainer
- trl
- dpo
- unsloth
licence: license
---
# Model Card for 7c1ab9da-9cb6-47e4-97e4-505eb72dc9ac
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/7c1ab9da-9cb6-47e4-97e4-505eb72dc9ac", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/g68up57s)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu118
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
coco101010/Qwen3-32B-GPTQ-4bit-custom-calibration | coco101010 | 2025-05-22T15:17:22Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
]
| null | 2025-05-22T15:07:44Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-32B
---
This model is created with the following code:
```Python
import json
import os

from gptqmodel import GPTQModel, QuantizeConfig
from huggingface_hub import constants
model_id = "Qwen/Qwen3-32B"
# Save the quantized model in the HF cache directory
cache_dir = constants.HF_HUB_CACHE
quant_path = os.path.join(cache_dir, "models--quantized--" + model_id.replace("/", "--") + "--custom--calibration")
os.makedirs(quant_path, exist_ok=True)
# Load calibration data
calibration_dataset = []
with open("./data/custom_calibration_dataset.jsonl", "r") as f:
for line in f:
if line.strip(): # Skip empty lines
item = json.loads(line)
calibration_dataset.append(item["text"])
# Configure and run quantization
quant_config = QuantizeConfig(bits=4, group_size=128)
model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)
model.save(quant_path)
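# Optional sanity check, not part of the original recipe: reload the
# quantized checkpoint and run a short generation. This assumes the
# gptqmodel wrapper exposes `generate` and `tokenizer` as in its README.
quantized = GPTQModel.load(quant_path)
output_ids = quantized.generate("The capital of France is")[0]
print(quantized.tokenizer.decode(output_ids))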
``` |
Amala3/IronyDetection_Llama_5-grained_EN | Amala3 | 2025-05-22T15:17:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T15:16:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coco101010/Qwen3-32B-GPTQ-4bit-default-calibration | coco101010 | 2025-05-22T15:14:51Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
]
| null | 2025-05-22T14:55:21Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-32B
---
This model is created with the following code:
```Python
import os

from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig
from huggingface_hub import constants
model_id = "Qwen/Qwen3-32B"
# Save the quantized model in the HF cache directory
cache_dir = constants.HF_HUB_CACHE
quant_path = os.path.join(cache_dir, "models--quantized--" + model_id.replace("/", "--"))
os.makedirs(quant_path, exist_ok=True)
# Load calibration data (1024 samples from C4)
calibration_dataset = load_dataset(
"allenai/c4",
data_files="en/c4-train.00001-of-01024.json.gz",
split="train"
).select(range(1024))["text"]
# Configure and run quantization
quant_config = QuantizeConfig(bits=4, group_size=128)
model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)
model.save(quant_path)
``` |
RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf | RichardErkhov | 2025-05-22T15:14:12Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T10:35:48Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rlhflow_mixture_iter2 - GGUF
- Model creator: https://huggingface.co/pxyyy/
- Original model: https://huggingface.co/pxyyy/rlhflow_mixture_iter2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rlhflow_mixture_iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q2_K.gguf) | Q2_K | 2.96GB |
| [rlhflow_mixture_iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [rlhflow_mixture_iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [rlhflow_mixture_iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [rlhflow_mixture_iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [rlhflow_mixture_iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q3_K.gguf) | Q3_K | 3.74GB |
| [rlhflow_mixture_iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [rlhflow_mixture_iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [rlhflow_mixture_iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [rlhflow_mixture_iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [rlhflow_mixture_iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [rlhflow_mixture_iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [rlhflow_mixture_iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q4_K.gguf) | Q4_K | 4.58GB |
| [rlhflow_mixture_iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [rlhflow_mixture_iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q4_1.gguf) | Q4_1 | 4.78GB |
| [rlhflow_mixture_iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [rlhflow_mixture_iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [rlhflow_mixture_iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q5_K.gguf) | Q5_K | 5.34GB |
| [rlhflow_mixture_iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [rlhflow_mixture_iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [rlhflow_mixture_iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q6_K.gguf) | Q6_K | 6.14GB |
| [rlhflow_mixture_iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/pxyyy_-_rlhflow_mixture_iter2-gguf/blob/main/rlhflow_mixture_iter2.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winssu/ppo-SnowballTarget | winssu | 2025-05-22T15:13:57Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-22T15:13:48Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: winssu/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Climate-TwitterBERT/Climate-TwitterBERT-step2 | Climate-TwitterBERT | 2025-05-22T15:11:41Z | 14 | 0 | null | [
"pytorch",
"bert",
"Twitter",
"Climate Change",
"en",
"license:mit",
"region:us"
]
| null | 2025-05-04T03:51:12Z | ---
language:
- en
tags:
- Twitter
- Climate Change
license: mit
---
# Model Card Climate-TwitterBERT-step-2
## Overview:
Using Climate-TwitterBERT-step-1 (https://huggingface.co/Climate-TwitterBERT/Climate-TwitterBERT-step1) as the starting model, we fine-tuned on the downstream task of classifying a given climate tweet as hard, soft, or promotion.
The model outputs a label and probability score indicating whether a given tweet is hard (label = 0), soft (label = 1), or promotion (label = 2).
## Performance metrics:
Based on the test set, the model achieves the following results:
• Loss: 0.2613
• F1-weighted: 0.8008
• F1: 0.7798
• Accuracy: 0.8050
• Precision: 0.8034
• Recall: 0.8050
## Example usage:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
task_name = 'text-classification'
model_name = 'Climate-TwitterBERT/Climate-TwitterBERT-step2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline(task=task_name, model=model, tokenizer=tokenizer)
tweet = "We are committed to significantly cutting our carbon emissions by 30% before 2030."
result = pipe(tweet)
# The 'result' variable contains the classification output: 0 = hard climate tweet, 1 = soft climate tweet, 2 = promotion tweet.
```
## Citation:
```bibtex
@article{fzz2025climatetwitter,
title={Responding to Climate Change Crisis: Firms' Tradeoffs},
author={Fritsch, Felix and Zhang, Qi and Zheng, Xiang},
journal={Journal of Accounting Research},
year={2025},
doi={10.1111/1475-679X.12625}
}
```
Fritsch, F., Zhang, Q., & Zheng, X. (2025). Responding to Climate Change Crisis: Firms' Tradeoffs. Journal of Accounting Research. https://doi.org/10.1111/1475-679X.12625
## Framework versions
• Transformers 4.28.1
• Pytorch 2.0.1+cu118
• Datasets 2.14.1
• Tokenizers 0.13.3
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_Adult_cfda_ep2_22 | MinaMila | 2025-05-22T15:11:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T05:21:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linagora/Labess-7b-chat-gguf | linagora | 2025-05-22T15:07:58Z | 114 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"aeb",
"dataset:linagora/Tunisian_Derja_Dataset",
"base_model:inceptionai/jais-adapted-7b-chat",
"base_model:quantized:inceptionai/jais-adapted-7b-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-19T14:53:45Z | ---
base_model: inceptionai/jais-adapted-7b-chat
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- aeb
datasets:
- linagora/Tunisian_Derja_Dataset
---
## Model Overview
Labess-7b-chat is an open model instruction-tuned for Tunisian Derja. It is a continually pre-trained version of jais-adapted-7b-chat on the Tunisian_Derja_Dataset.
- **Developed by:** Linagora
- **License:** apache-2.0
- **Finetuned from model :** inceptionai/jais-adapted-7b-chat
## Usage
```sh
ollama run hf.co/linagora/Labess-7b-chat-gguf:Q4_K_M
``` |
bruhzair/group1-g | bruhzair | 2025-05-22T15:02:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T14:38:02Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# group1-g
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
* /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7
- model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
- model: /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
base_model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7
merge_method: model_stock
tokenizer:
source: union
int8_mask: true
dtype: bfloat16
```
|
Wing12angelic/Mistralv2-1 | Wing12angelic | 2025-05-22T14:56:57Z | 77 | 0 | null | [
"pytorch",
"mistral",
"unsloth",
"trl",
"sft",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"region:us"
]
| null | 2025-05-17T04:58:06Z | ---
license: mit
tags:
- unsloth
- trl
- sft
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
--- |
New-Sophie-Rain-Spider-Man-Video-Free02/Sophie.Rain.Spiderman.official.Video.Tutorial.link | New-Sophie-Rain-Spider-Man-Video-Free02 | 2025-05-22T14:56:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T14:56:10Z | Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. |
Varinder2110/rss | Varinder2110 | 2025-05-22T14:52:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-22T13:28:27Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Rss
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/rss/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/rss', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/rss/discussions) to add images that show off what you’ve made with this LoRA.
|
alanmedeirossp/CloneAllan | alanmedeirossp | 2025-05-22T14:40:54Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-19T16:25:58Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
AhmedZaky1/arabic-bert-nli-matryoshka | AhmedZaky1 | 2025-05-22T14:35:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"matryoshka",
"arabic",
"natural-language-inference",
"nli",
"arabert",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Pair-Class",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-22T13:44:06Z | ---
language:
- ar
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- matryoshka
- arabic
- natural-language-inference
- bert
- nli
- arabert
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Pair-Class
metrics:
- cosine_accuracy
- cosine_f1
- accuracy
- f1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
base_model: aubmindlab/bert-base-arabertv02
license: apache-2.0
model-index:
- name: Arabic BERT NLI Matryoshka
results:
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
type: Omartificial-Intelligence-Space/Arabic-NLi-Pair-Class
name: Arabic NLI Pair Classification
metrics:
- type: accuracy
value: 0.8125
name: Best Accuracy (128 dim)
- type: f1
value: 0.8142
name: Best F1 (256 dim)
---
# Arabic BERT NLI Matryoshka Embeddings
## Model Description
This model is a **Matryoshka representation learning** version of AraBERT specifically fine-tuned for Arabic Natural Language Inference (NLI) tasks. It generates embeddings that can be truncated to different dimensions (768, 512, 256, 128, 64) while maintaining strong performance across all sizes.
The model is based on `aubmindlab/bert-base-arabertv02` and trained using the Matryoshka Representation Learning approach, which allows for flexible embedding dimensions without retraining.
## Key Features
- 🔄 **Flexible Dimensions**: Single model supports embeddings of size 768, 512, 256, 128, and 64
- 🚀 **High Performance**: Consistently outperforms base model across all dimensions
- 📊 **Arabic NLI Optimized**: Specifically trained on Arabic Natural Language Inference tasks
- ⚡ **Efficient**: Smaller dimensions offer faster inference with minimal performance loss
- 🎯 **Binary Classification**: Optimized for entailment vs contradiction classification
## Performance Results
Our model shows significant improvements over the base AraBERT model across all embedding dimensions:
| Dimension | Matryoshka Accuracy | Base Accuracy | Matryoshka F1 | Base F1 | Improvement |
|-----------|---------------------|---------------|---------------|---------|-------------|
| 768 | 80.3% | 56.8% | 81.15% | 41.94% | +39.21% |
| 512 | 80.6% | 56.9% | 81.36% | 44.32% | +37.05% |
| 256 | 80.95% | 55.65% | 81.42% | 38.7% | +42.72% |
| 128 | 81.25% | 56.7% | 81.37% | 40.6% | +40.77% |
| 64 | 81.0% | 55.8% | 80.51% | 37.92% | +42.59% |
## Quick Start
### Installation
```bash
pip install sentence-transformers torch
```
### Basic Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('AhmedZaky1/arabic-bert-nli-matryoshka')
# Example sentences
sentences = [
"الطقس جميل اليوم",
"إنه يوم مشمس وجميل",
"أحب قراءة الكتب"
]
# Generate embeddings (default: full 768 dimensions)
embeddings = model.encode(sentences)
print(f"Full embeddings shape: {embeddings.shape}")
# Use different dimensions by truncating
embeddings_256 = embeddings[:, :256] # Use first 256 dimensions
embeddings_128 = embeddings[:, :128] # Use first 128 dimensions
embeddings_64 = embeddings[:, :64] # Use first 64 dimensions
print(f"256-dim embeddings shape: {embeddings_256.shape}")
```
### Similarity Computation
```python
from sentence_transformers import util
# Compute similarity between sentences
sentence1 = "القطة تجلس على السجادة"
sentence2 = "الكلب يلعب في الحديقة"
embeddings = model.encode([sentence1, sentence2])
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")
```
### NLI Classification
```python
def classify_nli_pair(premise, hypothesis, threshold=0.6):
"""
Classify Natural Language Inference relationship
Args:
premise: The premise sentence
hypothesis: The hypothesis sentence
threshold: Similarity threshold for classification
Returns:
str: 'entailment' if similarity > threshold, else 'contradiction'
"""
embeddings = model.encode([premise, hypothesis])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
return 'entailment' if similarity > threshold else 'contradiction'
# Example usage
premise = "الرجل يقرأ كتاباً في المكتبة"
hypothesis = "شخص يقرأ في مكان هادئ"
result = classify_nli_pair(premise, hypothesis)
print(f"Relationship: {result}")
```
### Choosing the Right Dimension
- **768 dimensions**: Maximum accuracy for critical applications
- **512 dimensions**: Best balance of performance and efficiency
- **256 dimensions**: Good performance with 3x faster inference
- **128 dimensions**: Suitable for real-time applications
- **64 dimensions**: Ultra-fast inference for large-scale processing
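When truncated embeddings are fed into a dot-product index rather than a cosine-similarity routine, it is advisable to re-normalize after truncation. A minimal sketch under that assumption (the helper `encode_truncated` is illustrative, not part of any package):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('AhmedZaky1/arabic-bert-nli-matryoshka')

def encode_truncated(texts, dim=256):
    """Encode, keep the first `dim` dimensions, then re-normalize.

    Re-normalization matters for dot-product search (e.g. FAISS IndexFlatIP);
    cosine similarity is unaffected because it normalizes internally.
    """
    emb = model.encode(texts)                                # (n, 768)
    emb = emb[:, :dim]                                       # (n, dim)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

docs = ["الطقس جميل اليوم", "أحب قراءة الكتب"]
query = encode_truncated(["كيف حال الطقس اليوم؟"], dim=256)
corpus = encode_truncated(docs, dim=256)
scores = query @ corpus.T   # dot product == cosine after normalization
print(scores)
```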
## Training Details
### Dataset
- **Training Data**: Arabic-NLI-Pair-Class dataset from Omartificial-Intelligence-Space
- **Language**: Modern Standard Arabic (MSA)
- **Task Type**: Binary classification (entailment vs contradiction)
### Training Configuration
- **Base Model**: aubmindlab/bert-base-arabertv02
- **Max Sequence Length**: 75 tokens
- **Batch Size**: 64
- **Epochs**: 5
- **Matryoshka Dimensions**: [768, 512, 256, 128, 64]
- **Loss Function**: MatryoshkaLoss with CosineSimilarityLoss
- **Optimization**: AdamW with automatic mixed precision (AMP)
## Use Cases
1. **Arabic Text Similarity**: Measure semantic similarity between Arabic texts
2. **Natural Language Inference**: Determine logical relationships between Arabic sentences
3. **Information Retrieval**: Find relevant Arabic documents based on queries
4. **Semantic Search**: Build Arabic search engines with semantic understanding
5. **Text Classification**: Use embeddings as features for downstream Arabic NLP tasks
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{arabic-bert-nli-matryoshka,
title={Arabic BERT NLI Matryoshka Embeddings},
author={Ahmed Mouad},
year={2025},
url={https://huggingface.co/AhmedZaky1/arabic-bert-nli-matryoshka}
}
```
## Acknowledgments
- **AraBERT Team**: For the excellent base model (aubmindlab/bert-base-arabertv02)
- **Sentence Transformers**: For the robust training framework
- **Matryoshka Representation Learning**: For the innovative approach to nested embeddings
- **Arabic NLI Dataset**: Omartificial-Intelligence-Space for the training data
## License
This model is released under the Apache 2.0 License.
---
**Model Version**: 1.0
**Last Updated**: May 2025
**Framework**: sentence-transformers
**Language**: Arabic (العربية)
|
PKU-Alignment/TruthfulJudge | PKU-Alignment | 2025-05-22T14:32:53Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2025-05-22T13:34:23Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
---
# TruthfulJudge
TruthfulJudge is a reliable evaluation pipeline designed to mitigate the pitfalls of AI-as-judge setups. Our methodology emphasizes in-depth human involvement to prevent feedback loops of hallucinated errors, ensuring faithful assessment of multimodal model truthfulness. Our specialized judge model, TruthfulJudge, is well-calibrated (ECE = 0.11), self-consistent, and shows high inter-annotator agreement (Cohen's κ = 0.79), achieving 88.4% judge accuracy.
This model is a pairwise critique-label judge trained to decide which of two responses to open-ended questions from the TruthfulVQA dataset is preferable.
## Dependencies
```bash
pip install vllm transformers torch pillow
```
## Usage
Here's a simple example of how to use TruthfulJudge:
```python
from vllm import LLM, SamplingParams
from transformers import AutoProcessor
from PIL import Image
import torch
def create_prompt(image: Image.Image, question: str, response_A: str, response_B: str, system_prompt: str, processor: AutoProcessor = None) -> str:
    """Create a prompt using the template format.

    The image itself is passed to vLLM separately via `multi_modal_data`;
    the chat template only needs the {'type': 'image'} placeholder.
    """
prompt = [
{'role': 'system', 'content': [{'type': 'text', 'text': system_prompt}]},
{'role': 'user', 'content': [
{'type': 'image'},
{'type': 'text', 'text': f'[[Question]]\n{question}\n[[Response A]]\n{response_A}\n[[Response B]]\n{response_B}'},
]}
]
return processor.apply_chat_template(prompt, add_generation_prompt=True)
# Load model and processor
model_name = "PKU-Alignment/TruthfulJudge"
# Initialize model
sampling_params = SamplingParams(
temperature=0.1,
top_p=0.95,
max_tokens=2048
)
# Set parallel size based on available GPUs
parallel_size = 4
llm = LLM(
model=model_name,
tokenizer=model_name,
tensor_parallel_size=parallel_size,
gpu_memory_utilization=0.8,
limit_mm_per_prompt={"image": 1, "audio": 0, "video": 0},
trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
# Load and prepare image
image = Image.open("path_to_your_image.jpg")
image = image.convert("RGB")
# Example inputs
question = "What is shown in this image?"
response_A = "This is a beautiful landscape with mountains and a lake."
response_B = "This is a city street with tall buildings and cars."
# System prompt for judging
system_prompt = """
You are an expert in visual question answering. You need to critique and judge the two responses. Given an image, a question, two responses, you should output a critique and a label to indicate which response is better. You should also output a confidence score (a fractional number between 0 and 1) to indicate how sure you are about your judgement.
# Output Format
<critique>...</critique>
<label>...</label>
<confidence>...</confidence>
"""
# Create prompt
prompt = create_prompt(image, question, response_A, response_B, system_prompt, processor)
# Prepare inputs
vllm_input = [
{
"prompt": prompt,
"multi_modal_data": {"image": image}
}
]
# Generate response
outputs = llm.generate(prompts=vllm_input, sampling_params=sampling_params)
result = outputs[0].outputs[0].text
# print result
print("Model output:")
print(result)
```
## Output Format
The model outputs a structured response with three components:
- `<critique>`: A detailed analysis of the responses
- `<label>`: Either 'A' or 'B' indicating which response is better
- `<confidence>`: A score between 0 and 1 indicating the confidence in the judgment
Example output:
```
<critique>Response A provides a more accurate description of the image, correctly identifying the landscape elements. Response B incorrectly describes urban elements that are not present in the image.</critique>
<label>A</label>
<confidence>0.95</confidence>
```
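A minimal sketch for extracting the three fields from the raw output string (`parse_judgement` is an illustrative helper; it assumes the tags appear exactly as documented above):
```python
import re

def parse_judgement(text: str) -> dict:
    """Pull critique, label, and confidence out of the judge output.

    Assumes the <critique>/<label>/<confidence> tags documented above;
    any missing tag is returned as None.
    """
    def grab(tag: str):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return m.group(1).strip() if m else None

    confidence = grab("confidence")
    return {
        "critique": grab("critique"),
        "label": grab("label"),
        "confidence": float(confidence) if confidence is not None else None,
    }

judgement = parse_judgement(result)  # `result` from the usage example above
print(judgement["label"], judgement["confidence"])
```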
|
vermoney/fc0156b1-656d-4297-a878-c8b83684bea7 | vermoney | 2025-05-22T14:22:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-22T13:57:31Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc0156b1-656d-4297-a878-c8b83684bea7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8b7bf849706ddb22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: context
field_instruction: question
field_output: long_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/fc0156b1-656d-4297-a878-c8b83684bea7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/8b7bf849706ddb22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5765d8-a61e-4a69-91e5-abb95b3c7b6d
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 4e5765d8-a61e-4a69-91e5-abb95b3c7b6d
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
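To reproduce this run from the configuration above, axolotl's standard training entry point can be invoked; a minimal sketch (the `config.yaml` path is an assumption):
```bash
# assumes the YAML above has been saved locally as config.yaml
accelerate launch -m axolotl.cli.train config.yaml
```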
# fc0156b1-656d-4297-a878-c8b83684bea7
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.488 | 0.0168 | 280 | 1.3092 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jasonhuang3/jason_8b_2k_early_1 | jasonhuang3 | 2025-05-22T14:21:52Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
]
| null | 2025-05-22T14:14:57Z | by yentinglin/Llama-3-Taiwan-8B-Instruct |
quentinbch/q-FrozenLake-v1-4x4-noSlippery | quentinbch | 2025-05-22T14:19:53Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T14:19:49Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; it downloads and unpickles the model dictionary from the Hub.
model = load_from_hub(repo_id="quentinbch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
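For completeness, a minimal sketch of rolling out the greedy policy from the downloaded Q-table. This assumes the pickled dictionary exposes `"qtable"` and `"env_id"` keys, as in the Deep RL course notebooks:
```python
import numpy as np
import gymnasium as gym

qtable = model["qtable"]                              # assumed key, per the course layout
env = gym.make(model["env_id"], is_slippery=False)    # matches the no_slippery variant

state, info = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))            # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```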
|
videos-nimra-mehra-video-jobz-hunting-link/nimra.mehra.jobz.hunting.video.nimra.mehra.video.nimra.mehra.original.link | videos-nimra-mehra-video-jobz-hunting-link | 2025-05-22T14:19:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T14:19:17Z |
|
kristaller486/wikisource_preferences_ru-4b-03T | kristaller486 | 2025-05-22T14:18:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-22T14:13:36Z | ---
base_model: unsloth/qwen3-4b-base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kristaller486
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_Adult_cfda_ep1_22 | MinaMila | 2025-05-22T14:10:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T02:41:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-Molly-Mae-Viral-Video/wATCH.Molly.Mae.Viral.Video.Original.Link | New-tutorial-Molly-Mae-Viral-Video | 2025-05-22T14:09:08Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T14:06:44Z |
Watch the awkward moment fans fail to acknowledge Molly-Mae Hague - as viral clip leaves fans 'cringing'
The awkward moment, which was shown on her Prime Video documentary Behind It All, has circulated
Luke Shaw makes appearance in viral TikTok video with Molly-Mae Hague and it's left fans stunned
Shaw and his partner spent New Year's with Molly-Mae Hague and friends. ... Manchester United
Tommy Fury explains that viral running video as Molly-Mae prepares for split tell-all
Tommy Fury was roundly mocked when a video of him dashing for the finish line in a charity race |
EmreGed/sunergy8bit5e | EmreGed | 2025-05-22T13:57:32Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T13:55:45Z | ---
license: apache-2.0
---
|
MinaMila/gemma2_2b_LoRa_ACSEmployment_2_ep8_22 | MinaMila | 2025-05-22T13:51:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T13:51:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kokovova/ded5a618-a80f-4250-8fb8-566e03630711 | kokovova | 2025-05-22T13:46:58Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/gemma-1.1-2b-it",
"base_model:quantized:unsloth/gemma-1.1-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-22T13:41:00Z | ---
base_model: unsloth/gemma-1.1-2b-it
library_name: transformers
model_name: ded5a618-a80f-4250-8fb8-566e03630711
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for ded5a618-a80f-4250-8fb8-566e03630711
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kokovova/ded5a618-a80f-4250-8fb8-566e03630711", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/29e0mkpx)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hfendpoints-images/embeddings-sentence-transformers-cpu | hfendpoints-images | 2025-05-22T13:45:51Z | 0 | 0 | null | [
"hfendpoints",
"embedding",
"base_model:Alibaba-NLP/gte-modernbert-base",
"base_model:finetune:Alibaba-NLP/gte-modernbert-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-16T21:08:03Z | ---
license: apache-2.0
base_model:
- Alibaba-NLP/gte-modernbert-base
tags:
- hfendpoints
- embedding
--- |
Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF | Elusive316 | 2025-05-22T13:45:31Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:insidious316/nod_test-donsu-llama-25-epoch",
"base_model:quantized:insidious316/nod_test-donsu-llama-25-epoch",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T13:45:07Z | ---
base_model: insidious316/nod_test-donsu-llama-25-epoch
tags:
- llama-cpp
- gguf-my-repo
---
# Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF
This model was converted to GGUF format from [`insidious316/nod_test-donsu-llama-25-epoch`](https://huggingface.co/insidious316/nod_test-donsu-llama-25-epoch) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/insidious316/nod_test-donsu-llama-25-epoch) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF --hf-file nod_test-donsu-llama-25-epoch-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF --hf-file nod_test-donsu-llama-25-epoch-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF --hf-file nod_test-donsu-llama-25-epoch-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF --hf-file nod_test-donsu-llama-25-epoch-q4_k_m.gguf -c 2048
```
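Alternatively, the same file can be loaded from Python through the llama-cpp-python bindings; a minimal sketch:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Elusive316/nod_test-donsu-llama-25-epoch-Q4_K_M-GGUF",
    filename="nod_test-donsu-llama-25-epoch-q4_k_m.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```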
|
hyu1/model1 | hyu1 | 2025-05-22T13:40:45Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-21T16:57:43Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hyu1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phospho-app/nonosax-ACT-example_dataset-l22so | phospho-app | 2025-05-22T13:38:15Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
]
| null | 2025-05-22T12:42:09Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [nonosax/example_dataset](https://huggingface.co/datasets/nonosax/example_dataset)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
jayalakshmikopuri/deepfake-audio-detector_V2 | jayalakshmikopuri | 2025-05-22T13:37:52Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Heem2/Deepfake-audio-detection",
"base_model:finetune:Heem2/Deepfake-audio-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2025-05-16T11:10:55Z | ---
library_name: transformers
license: apache-2.0
base_model: Heem2/Deepfake-audio-detection
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: deepfake-audio-detector_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepfake-audio-detector_V2
This model is a fine-tuned version of [Heem2/Deepfake-audio-detection](https://huggingface.co/Heem2/Deepfake-audio-detection) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0708
- Accuracy: 0.9961
- Precision: 0.9949
- Recall: 0.9974
- F1: 0.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
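Pending a fuller description, a minimal inference sketch using the transformers audio-classification pipeline (`sample.wav` is a placeholder path):
```python
from transformers import pipeline

# The pipeline loads the clip with ffmpeg and resamples it to the
# feature extractor's sampling rate before classification.
clf = pipeline("audio-classification", model="jayalakshmikopuri/deepfake-audio-detector_V2")
print(clf("sample.wav"))  # placeholder path to a local audio file
```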
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4204 | 1.0 | 388 | 0.5132 | 0.9691 | 0.9483 | 0.9923 | 0.9698 |
| 0.5578 | 2.0 | 776 | 0.3286 | 0.9794 | 0.9697 | 0.9897 | 0.9796 |
| 0.2106 | 3.0 | 1164 | 0.1348 | 0.9923 | 0.9923 | 0.9923 | 0.9923 |
| 0.2262 | 4.0 | 1552 | 0.0624 | 0.9961 | 0.9923 | 1.0 | 0.9961 |
| 0.0 | 4.9884 | 1935 | 0.0708 | 0.9961 | 0.9949 | 0.9974 | 0.9961 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
jimmeylove/week6Mli | jimmeylove | 2025-05-22T13:35:40Z | 15 | 0 | peft | [
"peft",
"safetensors",
"llama-3.2",
"sarcasm",
"reddit",
"lora",
"en",
"dataset:custom",
"license:mit",
"region:us"
]
| null | 2025-05-21T18:30:10Z | ---
language: en
license: mit
tags:
- llama-3.2
- sarcasm
- reddit
- peft
- lora
datasets:
- custom
---
# Sarcastic Reddit AI - Fine-tuned Llama 3.2 1B Model
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) that has been trained to generate sarcastic Reddit-style responses. It was fine-tuned using LoRA (Low-Rank Adaptation) to maintain the base model's capabilities while specializing in sarcastic responses.
## Model Description
- **Base Model**: meta-llama/Llama-3.2-1B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Data**: Custom dataset of Reddit-style sarcastic responses
- **Special Capabilities**:
- Generates consistently sarcastic responses regardless of input format
- Works with both questions and statements
- Produces complete responses that finish naturally
## Intended Use
This model is intended for generating sarcastic responses in a Reddit style. It can be used for:
- Entertainment purposes
- Creative writing assistance
- Chatbot applications requiring a sarcastic personality
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jimmeylove/week6Mli"
base_model_name = "meta-llama/Llama-3.2-1B-Instruct"

# Load the base model, attach the LoRA adapter, and load the tokenizer
# (the tokenizer must be loaded from the base model's name, not the model object)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base_model, model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
# Format prompt
prompt = "On Reddit, someone asked: How do birds fly?\n\nA sarcastic Redditor replied:"
# Generate response
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
max_new_tokens=1000,
temperature=1.5,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
- The model may occasionally generate non-sarcastic responses
- As with all language models, it may produce inappropriate content
- The model inherits biases from its training data and base model
## Training Details
The model was fine-tuned using the following parameters:
- LoRA rank: 8
- Target modules: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- Training data: 3000 examples of sarcastic Reddit responses
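A minimal sketch of the corresponding PEFT setup (only the rank and target modules are documented above; `lora_alpha` and `lora_dropout` below are assumptions):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

lora_config = LoraConfig(
    r=8,  # LoRA rank documented above
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,      # assumption: not documented in this card
    lora_dropout=0.05,  # assumption: not documented in this card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```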
|
bruhzair/group1-e | bruhzair | 2025-05-22T13:26:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T23:14:38Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# group1-e
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Daemontatox--Llama3.3-70B-CogniLink/snapshots/99ede7d64184a107a405eea01f0a3eb5dc9f669a
* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7
- model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- model: /workspace/cache/models--Daemontatox--Llama3.3-70B-CogniLink/snapshots/99ede7d64184a107a405eea01f0a3eb5dc9f669a
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
base_model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7
merge_method: model_stock
tokenizer:
source: union
int8_mask: true
dtype: bfloat16
```
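To reproduce a merge from a configuration like this one, mergekit ships a `mergekit-yaml` entry point; a minimal invocation (the local snapshot paths above would need to resolve on your machine):
```bash
pip install mergekit
mergekit-yaml config.yaml ./group1-e --cuda
```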
|
18-VIDEOS-Imsha-Rehman-Viral-Video/Orginal.Videos.Clip.Imsha.Rehman.Viral.Video.Leaks.Official | 18-VIDEOS-Imsha-Rehman-Viral-Video | 2025-05-22T13:24:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T13:23:23Z |
Pakistani Tiktok Star Imsha Rehman Video Link Viral Watch Full Video Clips , Imsha Rehman Full Viral Video Download Link.. Actor - a young and talented digital creator, recently became famous thanks to this interesting video Leaked Online Video Actor - Original Tending Video Clips ,
Watch Imsha Rehman Full Video Clips , Tiktok Star Imsha Rehman Viral Video Download Link,, Click Here To link |
lewiswatson/nanoVLM | lewiswatson | 2025-05-22T13:16:49Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
]
| image-text-to-text | 2025-05-22T13:16:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("lewiswatson/nanoVLM")
```
|
LucidityAI/Kiwi-1-8B-Preview | LucidityAI | 2025-05-22T13:16:41Z | 4 | 0 | null | [
"safetensors",
"qwen3",
"en",
"region:us"
]
| null | 2025-05-11T15:04:03Z | ---
language:
- en
---
# Kiwi-8B Preview
Kiwi-8B is a hybrid reasoning model based on Qwen's 8B model, fine-tuned for better STEM performance.
Here are a few examples from Kiwi-8B Preview, Kiwi-1.7B nano, and Kiwi-4B. These are all one-shot results with the same settings.
| Model | Generated GUI for "A Tailwind Dairy Shop" |
|---------|---------------------------------:|
| Kiwi-4B-Preview | <img style="height: 250px;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAycAAAIeCAIAAAAuyecvAAAgAElEQVR4Ae3d/5Mc9X0nfv8l+xuV2ir94JSqqKwxB1bM8VGsDyDr8C3nszcxB7k40scxBxF8QtDZYOKycGyvHaKIyDYf+QTBjowV+8AhWd0HvA62kQFLBoOMdRKWAUkGhKTV7mp352qmZ3rf3TOzu9qZ6enpfmypotme7veXx6vDPP3unta7Kn4IECBAgAABAgR6L/Cu3nehBwIECBAgQIAAgYrU5SQgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhIHVloawPAgQIECBAgIDU5RwgQIAAAQIECGQhUOrU9csX5z/8789fOzL1x//h/JunFrLwTvZx6o2FO/7r9MbLpj6/beb8VPK95G+vvDT/p6PnP/jeqd07Zufmku/5jQABAgQIEBgEgRylrvNTlb/cPH3tyFTznx8/1ZOgsbrU9eOn5ppHOPaB85/7i5mfTM7Nz19E2eOmPvzvz//yxaWO3Lv7QtRpvwLiRczKrgQIEMiHwM8Ovfh777v2Z4dezMdwVj+Kwkxk9QRFOVLquui1rjgqNWeva0em7viv06feWOmy2ZunFv7fP1nRWtf//qW1rqL8/5x5EOieQMsP4zdOnLr6uo8MVtT42aEXf+d33zc0PBL9+Z3ffV9Xxt/Sp3v8PWnpiYmnhoZHnph4Kmx9ECeSqumHPvrxs+eWvKYTTri4r3Oaunq0uJWqY4drXX+5eTq6LHj6rYV/3H1h9Pfrq3TbPjF97sxKg1dqSH4lQIDAygVafhgPaOoKF6Xuf2B3V4JXS5+V8z4x8dTvrbvujROnVn5Ih3uePTf1oY9+/OrrPvLJ2z8dNtXhRMKm4tdvnDj1e+uuS8W7+N0OX3zy9k+HdGfPTX3y9k93ItlhLTo8vEON8HCpa/VrXXHqikCff2b+hvfXg9f3vnUhVPaaAAECvRBo+WFcgNQVhY/7H9jdIVpLn5W3mf1H9c8OvXj1dR+p9pu8MNrhRFpOuXep6/4Hdnd9ZavDWnR4eEvA1W0cpNT1pbtnoot6/7j7wv7H5v7TVefj3HPy9YW/+avZ//x/VSPU6O9Pfe7Omdd+XV9tWlioHHp2/lO3zERrUR9879QnPjId3UQVrnUdPDD/6VtmPvjeqQ++d+ov/nT6+NG2i1XxFca494h+YaGy8/Oz0QjvvnVmZqa6+ddHF/72c7N//B+qN8JfOzL1sWvP/3/3z8Y3zocDiG7nDxt/+efznxybjm75CrfHhy8x60qlcvL1hb+7b/Zj11ZNoq73PSwLru7/TRxFIKcCLT+Mm1PX/Q/sji/ehYson7z909Gf6N37H9gdxZ3413Dan7z90929/Bc23jyRaGCVSiX6vNz98N6h4ZH4s7zdjCqVSvzW7/zu+3Y/vDeOL09MPBUfXqlUUpkjnHh0gS+e79DwSLxsE27sxSrR/Q/sjgr0yds/HYbOyOeHPzrwoY9+PKpCWMdIKdoeKkUa4Z6xc3QdMz6ki3NJwYZVjl/HNRoaHgmHFxU9frcle7wx3m1oeCS0Cs+BqP2wavHh8WAyfjGQqeuvbp+JvnsY5Z7DL87/0f9dzxbxvVYf/YPzL/+8en/6D/fPXX9lfQkqfje6ghmHno2XTX3w8sQ+Wz7c9vaslgEoKtsP99dvtI/veY+TYtz1tSNTn/nzmZnp6hHxAOL948Zv+aPpW/6o+t2Cdqlr6VmfeG1h839Km3zp7loSzPgU0x0BAj0TiD9Ewx5SqSu81hNlizh8RJ9G0Sdu/DEc/xpe44szUPQBH74Vdr3q16mJhGtd0cBSn83xZ2dqRvc/sDt+K/r4j4e6ROqK9oy7eGLiqRghbi36LI/p4n1WPeXmA8O8klqbie6RiueSGnDqgmxY8TjGRd2FzmF3zYNZ9ZbUyJvbCYeXKl90QkYRKnorLEpYizdOnNr+pb+LJ/U7v/u+ODiG7ce7LTuq5nH2aEtOU1eYUeIlpeYE85ebp397cmHrzdV0cv2VU0/+89zcXOWF5+Y/+gfVwPH5bTNnz1S2faL+vcjvP3phfr5yfqryxL65nz9XDWRx6Ll2ZOobO2Znpiu//MX8x66pHrvxsqkfPdn6i5NxMIoHFtcmfitOUV//yuwT++ZOv1VdOXv7zYW7/p/qYP7juqkXkgOI949biAVapq5zZ5aa9dxc5TsP1b/z+Jk/rz6TYn6+8sLz89a64kp5QaAYAqkbluOli/jjOfyUjaYcbgmzVOpDLvo1+vyLLnvFN+WEb3WLMRxVpVIJPzifmHgqnk6lUkntGW5pjhHhzkukrnZXxFIf1SFXtyYethN2l5pLVOg4WMRLgG+cOJXaM7WM15fUFWfTcHbR67AizVs+efunw2NDkPB1qtnwhGxuP9p5icNTrfX614FMXR/9g/M/fXpuYaEaoZ790fymf1ddpvrcX8xED7KamancfWv1WuTNm84fP7oQP43i3q0zv3l1YSG4chinrps3nY++eLiwUPnr/16/jrl3d+vrcXEwWknqiut39szC8aMLcXB8/NvVxuMBNKeujZdN7XmgGgQvzFYuXKikOn3ux0vN+tQbC/GTJm7eVLW60Hoq8ei8IEBgIAVafsaEa12pqFGpVKKPqOjzO4wR4UdXvFuUuuJlsDjVNV/T6ZAvFR/bffRGaSN8Nx7qExNPpdJhGMiaDwzDSupyXjyX1Ed1asEp3q1bL8JyRNEzXulpLnS8pXnW0bFR7fqSusJFqRTOyk/IMFmmXkdtRhWMz8n4XE2dHtHOqVKmRpXlrzlNXS2/wxhHlt07ZmOjeFEnXhyKX0SrRI/uubDxssWrhx+75vz3vnUhSiHNoadSqcS9rCJ1Pf7t+grTbf9l+sw71Xz3wnPzt//xdHRTVzywa0emosabBxCnq7iFaKbx9ijqLTvrV16qL/hFnV5/5dSX753py5Ng40p5QYBA1wXij96w5V6krpafZGGnHb5uOZGozdTn5RIf2835I2w2deAqUlc8nqHhkXD5rcO5R4encmcU
JuJewonE+0e3rDXPur+pq3mooU+qCmFoTgXNVNJKnQbRTV3R/3gI/wdDc/tR76nDwyFl/HogU1eYh+JFnTDTRK+j1LWwUPnJ5NwnPlJ9LFa8z99+rvqE9+bQ00nqunChsv0v6+tkOz8/u7BQCaPP2AfOb715+tYb65c7l01dqYW0VOpadtaVSuW1Xy987i9mwnvalrhZLePTTncECHRFoOUnXJi6mncIt4SLK+FHV/xZGF9hjO9J78qwmxsJR5V6N/V52bxnvKXlZbg4uDS3E98MtMIrjPHAUlbx9k5eNI8hyoXtShDvH8bHaADhlni36K3wcm24WycjTx0b4cSrdKl342LF28Mt4Qm5ROpK+Ye/hq3FXaSaCrdn/3rgU9eBH85FcSq+wtgO8Z23F77+ldlo5z/50Pm3frvQ3dQV37Z/w/unfnGwet/YN79eX/p6Yl/1emi4pcPUtfJZX7hQ+cG/zkU3um28bOrAD1vfrNYOzXYCBPIs0PIzJkxd0fpBfMUn9YkYfsiFH12p1BW9FS53bf/S38W3eXXFp+VEopZTaWnZGcWTjVJFnLqiX6MQE80ofhhp9FYcFOI75VOjimedsupcoF2DcWaKAmU8tejXaC7Rbf7xNCOfuFhhEk2BtOu08+lEHcVjiE6n+Hld4U170Rhi+fCETEWlVC3CPaN1r1gjbD++mz51eOdzXHULA5+6Tr6+8Kc3VO9/v/7Kqce/fSF6XsNbv114eNeF5348f36q8tXx2Reem79woXpH+Q/+de4/rquueH3iI9On3+pa6nrrtwsP/f1stKq08bKpvd+4EGWsB/66/iCJf/ha9Ub+I4fn/+RD9e8Vdpi6lp51pVL5/qMXntg3Fz2s9fjRhU98pLrGtunfTT3346X+3aFVn0YOJECgLwItP0tSqSv6GE7d/hKNNvzoSn0Gt/y1ZSNdmXjLiUQtN6euJWYUvhU9AjRcpYtvUItWucJnhEZBIZpgHG7i1qIt4QMI4s/4rkw/XIIKG4wzU+Sz++G98RP8UwMIH6MQh5ioqfitJUDC+/TDAXTyOuRK3QgYvhVOJDwhU6krVYuwXtu/uONDH/14qp34XI2nFnUaFreT2a362IFPXe2eDXHtyNSPn5pr+W87fvC9U/v+ofXN7Bd1hTG+Xhm/uP7KqX96pBqwop9nfzQfXuDbeNnU2Ae6k7qWnnWlUml5CfLu/7bMv7G96tPIgQQIECBAgMCyAkVIXZVK5div5j/z5/XnoG68bOqmD57/6vjs6bcWZmYq/2Pn7E0fPB9dWBz9/alP3TJz6Nn5aC2qwyuMcdj64Hun/nT0/Dd2zKZuV19YqDz1xFz0KIroLv5/+Fr9mmOHa11RXdvNulKp/OQHc58cm44yX/Rg2P/5jxeih4Qte07YgQABAgQIEOiFQI5SVy+mp00CBAgQIECAQE4EpK6cFMIwCBAgQIAAgYILSF0FL7DpESBAgAABAjkRkLpyUgjDIECAAAECBAouIHUVvMCmR4AAAQIECOREQOrKSSEMgwABAgQIECi4gNRV8AKbHgECBAgQIJATAakrJ4UwDAIECBAgQKDgAlJXwQtsegQIECBAgEBOBKSunBTCMAgQIECAAIGCC0hdBS+w6REgQIAAAQI5EZC6clIIwyBAgAABAgQKLiB1FbzApkeAAAECBAjkREDqykkhDIMAAQIECBAouIDUVfACmx4BAgQIECCQEwGpKyeFMAwCBAgQIECg4AJSV8ELbHoECBAgQIBATgSkrpwUwjAIECBAgACBggtIXQUvsOkRIECAAAECORGQunJSCMMgQIAAAQIECi4gdRW8wKZHgAABAgQI5ERA6spJIQyDAAECBAgQKLiA1FXwApseAQIECBAgkBMBqSsnhTAMAgQIECBAoOACUlfBC2x6BAgQIECAQE4EpK6cFMIwCBAgQIAAgYILSF0FL7DpESBAgAABAjkRkLpyUgjDIECAAAECBAouIHUVvMCmR4AAAQIECOREQOrKSSEMgwABAgQIECi4wLvOnj371ltvnTx58vXXXz9+/Pirr756rPZz1A8BAgQIECBAgEBnAlGsevXVV48fP/6uqampM2fOvP3226dOnTpx4sTrtZ/X/BAgQIAAAQIECHRDIApXJ06ceNf09HQUvE6fPv3222+/2fj5rR8CBAgQIECAAIHOBBrB6s233377XbOzszMzM1NTU+fOnTtb+znjhwABAgQIECBAoHsCUcR619zc3IULF6LsNV37Oe+HAAECBAgQIECgewJRxHrX/Px8FLyi7DXrhwABAgQIECBAoAcC71pYWJhv/Mz5IUCAAAECBAgQ6I1ANXXFP4305W8CBAgQIECAAIEuC9SfkhoHLy8IECBAgAABAgR6IeDZ9BU/BAgQIECAAIEMBKSuDJB1QYAAAQIECBCoSF1OAgIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIB
AFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECBAgQIBAFgJSVxbK+iBAgAABAgQISF3OAQIECCQEJraODA3Hf7ZNJN70CwECBFYvIHWt3s6RBAgUUkDqKmRZTYpAHgSkrjxUwRgIrEpgbubwk3vu3nLT2iuvitdmLrnyhtFbdzx88NT0qprs0UHJHBMvI9VeXLph7egdd++aPHyuR51fdLPJ0VrrumhABxAg0E5A6monYzuBPAvMHHv8vnXvTsaXxYtite1Xb3v46ExO5pDMMW2GvWbD2ENH8hAWk6PtT+qafufI5EPjox/YvPNoTmpoGAQIdEFA6uoCoiYIZCowd+rRWzfEi1tLvthw83dPZTq2Np0lc0yb1FVLjeu/frxNG9ltTo4269R14sn71r/nikZZx6Su7AqvJwK9F5C6em+sBwLdFDgzsXWFkSsKNxu2TJzpZv+raiuZYzaN3nXvbdU/d6wPro3Wc8aaOx7t96XG5GizTl3Hdo01ItfI0LDUtaoTzkEE8iogdeW1MsZFoJXAiUc2Bx/JtVx1zbadTx458c5MZW7m9K9femz8lrVrkotJl2577J1WbWW4bYkcc/rAjvXJAY99t88XRpcYbQZmUlcGyLog0C8Bqatf8volcPEC5ya3XJpIVGu27j8919TOK3tSOeay8Zeadsp0w9I55tAXNoVR8pJ7DmY6uKbOlh5t0+5d3iB1dRlUcwTyJCB15akaxkJgSYHp794RppOhq3ccao5ctRZOp/a89L4D9ZaP79wY5LaNe44lemz7bjIK1C66vTm5/cYbLqkuU11x22SileZflskxE9sS89paby5xVG2o0y/v2zJa+8Lmmpse/HXYz8yx6Ouc8R1Ra65a+/6bxsb3HXoz3C35eu7MgYfuHb2m8Q3QS28Y/ez+Y9OVRL/DwRXG5Di3pB7ktfS7tZ6nX5ncedfmdcF11Uuu3LR+657J1yqVShI/9d2I4ZF1u4I73qaPT+waHxvdtCb+RsWlG9Zes3nL+OSJ5BT9RoBArgSkrlyVw2AILCEw8+iWIDANjyx1JW7umTsTq2KbttfXj5If7atOXe9Mbrl8cTDp/NE0ibY5prZnKk3Ga12JozbuOZxYw1u84Wn66L6br1wcTCLAVbPLhtFdB083Dany2vdbH3X9nvFbw9a6lLreObj9DxvxrilR1QCTpWnaJ05dp58eb//11WC0zVO2hQCBfgtIXf2ugP4JrFTgYDJIbX745FJHPpaIDnFES360rzZ13fmpRIDoLHWdaZcmk6nrvjs/HIahRupKRLFwh8TrtXdNJoJXMjU2BbXw2CDHLL2atcS7y3V3EanrtX2jydvgkoMPRrvU2eE9AgT6IyB19cddrwQuXmByS7j+sebe1AWuVIPJa4Lx9amupK4wlFRfrzp1Tb/20oOpp2AE32FMpK5w7tXXtdQ1d/DOYMmtmj8uHRurfUFyy42Je8WGhjfc9mR8k/7MxNZEahxac9X6LdWvVY5dn9xe7SjIMUvkqkql0vbd4zuvT4ldsWb0jtq3OLeNfqDaYw1w5vDk/sce3//greHIN23ZXd342OP7J1+pjj95D9yG0fHJwydPnTh56sSvX5p4aMfYNfctfVakThK/EiCQsYDUlTG47gisVuDonnVh8kgvU6Wb7XHquuK68WdO1B7xMP3OkWPV25KW+lkyPyUSSfi8rvRRazbcvPfI6dpzVKdfO3LiXCX1jc7UdwtOPzeeEPvwt+r3PJ3cNxpKXr7tseDer9NPbFsTvtt56nr6vkvCBq9MP7329IEdDwY3xiUL11jSa+gmTfr/lI3GuPxNgMCKBKSuFTHZiUD/BVJZoa9rXZdsnbyoh8gns0IiZgUXyK64bjxxA1bqqPjGpkYtkje6BYtkjR1S98U3MsoTiZv3x/amnmd26sHEpcxO17qSsxgbfyUeXesXS6eu5FrXyNCVt2x//KUTF1WM1t3aSoBAFgJSVxbK+iDQDYHkFcbhZe7rmrgrEW4aFwG7coUxvjd/pdNKJo/EwOqp6+o7dh5IpZ82mWmxzyPj1wRN/eG+5q/vJRfD6utG6ViT+C5ktfXkaDtMXUnw61NfGl2cTPwqPbzUvwj04o7LwpWz+usr1t44/vCLacC4TS8IEMiJgNSVk0IYBoFlBdK3B938eHyjUvOxqVvv4wtVyRCQvkzZ9t1kFGgsGjV322ZLMsfEz6a/97a7djz4+DOH32k9kcRR799xKN14MoY2njeR2Ctxr1UdIZlHg1DVODLRb6dXGCdvC29+bznIRr/R30nquHCLO5347rb0g3AbOSz9pYHFg7wiQCAXAlJXLspgEARWInB4PLzPemRo49cOr/B5XVfvOFzvIBndVpm6WiSVpcffNscseVjiqPRQK5XK8mtdx77e4l/XOXBPeMt8iwSZ6DdMXU/eG1wPbfoOQSLhxe++tP39yyzIpQyWTV3V/d986eHPbl4bP6yrkbqGhkdGH8nFv7yZmpRfCRCIBKQuZwKBwRF47VvXBZ+vQ8MjqfvHo5lMv/y11LPpw0/iRKRYs20izG3nvj8Wth8EnWQUyEnqWva+rplH/yxIPI3JJudy1Z1PJ0+AuYN3hzkpTF3JLzSk7jNL3XHVuKR76uE/DMYwfNODy33zIDm8FmtdwXBnTr88uXPrDWEWHGp1pTU4xEsCBPopIHX1U1/fBC5W4MA9Tf/0dfTvMJ48M33uzIlf15ZAwktawyOpR9gnr6+NjH03vhlo5vD94crQyFDeU1flxEM3hYFjzdbEQ7lOTyS/jfhn++s3nR9Ifqlw49cOL9
6NfubAF5IIYer69Z71YSoN/22AN/en/rGmRupKD3Lo+h0Hgq9MViozJybGx4PvMCbvRRtJ3ew/cf+9jx1NX5BNPJstqNrFnl32J0Cg1wJSV6+FtU+gqwJzR5oe/hQupTS9XjO2M/mludSH+tDwFevaPaoq+PxOLsDkZK2rUjk3eVub53WNRf9w0GJICr4B0PyUr3dvGr313tvu2jZ69RVhjKu9DiebultuZOjKzVvuuve2W1tc7ItTV4tBrrnqshu31Z7Xdcf62j9htLhzpVJJXsccWrPhulurXdy8u/ovAtWWKq9Yt3XPYwePVx/TdfL4ocd3jIYIK7h1rKtnpMYIELgIAanrIrDsSiAXAu8c3J5+6mZT2IrSxrtv2vlyel2kcouVaVAAACAASURBVG7/zanFsDiaXD62Pry4lv/UVamcnri33a3lYX5av+tIWLv0MlgsUH2xYU0YYsK1rkolfWtdcOD66xOLZGGQWnaQ4c7tChRd0ExcIA56b0w2CJfhhL0mQCAfAlJXPupgFAQuSmDuzIFdtyydNtZ+fE/yStZiB60zx7tv2vnKkXb/NnZO17pqczr93J6xJf4dxjU33PZEixvMD389cXWykVquuO7rRx7bGqbYcK2rUmmz1lj98mDru+nr7Kef25NYkUoGpkTqqlSO7W4xtuVT15ob7nwyvl68WG6vCBDIj4DUlZ9aGAmBixQ4d2TyoR1jo5vWxv/Q9Zqr1l6zecv4vkMnm5a4km2ffnHf3TfecEm06HXphvWf2neoerPRCp8ckQwiyZZb/pZcoVnp4YmjgoW3Fl3MnTn0+J7bbrxhkeLdV60dvWP7Q88cW7xnK33c9NH9d994w5rom4Brrlo7uu3B2jPDEv0m17qqTUwfn9i1LboyODR8xZpr7tj+5PFqJ0umruqBc2cO7a3Wq95j9d8v2lAd5N6XToffaagN8/SBb20Z3VAvULWXzeMHqjWdPvrMg+PbRq9JN3L3rsklZpqeud8JEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDIBqatkBTddAgQIECBAoE8CUlef4HVLgAABAgQIlExA6ipZwU2XAAECBAgQ6JOA1NUneN0SIECAAAECJROQukpWcNMlQIAAAQIE+iQgdfUJXrcECBAgQIBAyQSkrpIV3HQJECBAgACBPglIXX2C1y0BAgQIECBQMgGpq2QFN10CBAgQIECgTwJSV5/gdUuAAAECBAiUTEDqKlnBTZcAAQIECBDok4DU1Sd43RIgQIAAAQIlE5C6SlZw0yVAgAABAgT6JCB19QletwQIECBAgEDJBKSukhXcdAkQIECAAIE+CUhdfYLXLQECBAgQIFAyAamrZAU3XQIECBAgQKBPAlJXn+B1S4AAAQIECJRMQOoqWcFNlwABAgQIEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDIBqatkBTddAgQIECBAoE8CUlef4HVLgAABAgQIlExA6ipZwU2XAAECBAgQ6JOA1NUneN0SIECAAAECJROQukpWcNMlQIAAAQIE+iQgdfUJXrcECBAgQIBAyQSkrpIV3HQJECBAgACBPglIXX2C1y0BAgQIECBQMgGpq2QFN10CBAgQIECgTwJSV5/gdUuAAAECBAiUTEDqKlnBTZcAAQIECBDok4DU1Sd43RIgQIAAAQIlE5C6SlZw0yVAgAABAgT6JCB19QletwQIECBAgEDJBKSukhXcdAkQIECAAIE+CUhdfYLXLQECBAgQIFAyAamrZAU3XQIECBAgQKBPAlJXn+B1S4AAAQIECJRMQOoqWcFNlwABAgQIEOiTgNTVJ3jdEiBAgAABAiUTkLpKVnDTJUCAAAECBPokIHX1CV63BAgQIECAQMkEpK6SFdx0CRAgQIAAgT4JSF19gtctAQIECBAgUDKBXKeuBT8ECBRaoGT/vTVdAgTKLpDH1LWwsPDGG288/fTT3/nOd77xjW/sbvw8//zzMzMzZa+Y+RMolkCUKos1J7MhQIBAa4F8pa7ov79TMwunpxJ/3pma/6fv/s9du3b98If/Nj093XoqthIgMJgC8XLeYA7fqAkQILBSgRylrui/vK+9Nf/yby6k/hx5Y/YfH/3e888//8ILL/zLv/zL+fPnVzo/+xEgMAgCgtcgVMkYCRDoVCAvqSv6b+6rb5z52dHZn/5q5tVTc5VKZWpm4ae/mvnpr2ZeODb9zb3V1FWpVI4ePfroo4+eO3eu06k7ngCBPAkIXnmqhrEQINATgVykrvoq12uvPffyyef/9+wbb1cjV/QzN1957sj0z49NP/TN73yp8fPlL3/lq1/9WmMXfxMgUBCB+fn56L8GBZmPaRAgQCAp0P/UFf1Hdm5u7qmnfvDsyyfPTi+EI5ybX/jpK+cPHD77q9emT7w99+aZubfPzr/6m1O7dn013M1rAgQKIDA/Pz83Nyd4FaCUpkCAQEuBXKSu+fn5CxcuPPLIN5996eSzvzr/o5fPpf785JdTzx05f+jo+Z8fO//S8Zmfv3LygQf+vuV8Ot84sXVkaOtk5+0stjCxbWh4bOfR2obq620T0cuud7TYpVcEBlJgYWHhwoUL0YrXQE7AoAkQILCkQJ9TV/Q/aufn52dnZx944O8P/OLEM7+c+uEvzi7x55lfTh34xYmdOx+oVCa3DI8MhX+6kZakriVPGG8S6K3A7Oys64y9JdY6AQL9E+h/6ooWuqanp3fs+LtnXnzjJ4enJl84s8SfnxyeOnJyviZWTV1booWj7gl2P3WFY7PWFWp4TaBJYHp62nJXk4oNBAgURCAXqWt2dvb8+fP33/+3z7zwxo9fmnrq0Jnan3f2Hzz1nZ8e+u6zL/2vg282Np554Vj82AipqyBnoWkQiAXOnz8fL3fFG70gQIBAMQT6mbrCy4vnz5//m7+5/ycvvPFvL579X8+/ve+ZFz/7r1/40CO/N7p3ZHTvyJ/sG/37H+z752eP//I3ceSqRFcYW6x1RetJ1f87MrRxz7FaoY7tGouvRQaHHN+5Mb5GWb/1KlrrCvav34aVrvfRPeuGx3ZO7FlXv8RZ2y3qtLolOCpY36oEr8NFtai7dbuOp3vxO4GSCYSpa2Eh8d2akkmYLgECBRTof+qam5ubnZ2dmpr68pe/8qOfvz75wtn/8fRTt3//v43ufU8UuaL/++G9Vzx++FtRBU6ePPnlL39lydSVuCO+mmka8atSTUv165LJ3LMtuuG9unF4pBGAarGs5e1itXYazTbSW72X5FFB0mqduqo7NG63L+A5ZkoELkJgampqdnbWNxkvgsyuBAgMjkAuUtfMzMy5c+fGx7/8o5+//u2f/vjPH/9EKnKN7h358fH/P1J9Z2rhRz9/fXz8y81309cXsdIhZnJLMtM0wlY1GzXS1WLFqu/GEa1SC
XPS4k7Vp7XW1rqibybWdwuSU5ukFbZWH0Z1z+7fnZYYql8IDI7AuXPnZmZmpK7BqZiREiBwEQJ9Tl3R43lmZmbOnj37xS+O/+uzL2ybuP3D374iXOUa3Tty5O2Xoznt/9U///VT4/928Ddf/OL4kmtdwQW+aFEq/KrjcGMlrJZ4ElcDK5VGJmsghvmpsa36d4vUFXQaHtXmdS3ejcULb2HbXhMorcDZs2ej1OX5EaU9B0ycQIEFcpS6vvCFL/73793+X/7pA2Hk+tg/XXV29kxUgAd/9sXqPV7f2/T5J+77/Bf/+mJSV5CHmorZuIWrvk/2qat5va1pjDYQKIuA1FWWSpsngVIK5Ch13XvvX91zz2fuvvue+E/toVz1sjzyyDfj7ffc85nPbd++0tRVe6xXcAd9yzovfh0y09S1dTK6z0zwalkVG0soIHWVsOimTKA8AjlKXRePvhiVEseGV/Rqb1SDVPClwmO7ohvnj+/cWv+GYxjgsk5d9YuVLe4wS0zKLwTKISB1laPOZkmgpAKlSF3Vm91r30ysPzyi/p3ExhcPa7d8xYthfUhd9Zvxk3fxl/SENO2yC0hdZT8DzJ9AoQUGOnUVujImR6CUAlJXKctu0gTKIiB1laXS5klgIASkroEok0ESILA6AalrdW6OIkCgJwJSV09YNUqAQD4EpK581MEoCBCoCUhdTgQCBAosIHUVuLimRmDwBKSuwauZERMgsGIBqWvFVHYkQKD3AlJX7431QIBA3wSkrr7R65gAgWYBqavZxBYCBAojIHUVppQmQqAIAlJXEapoDgQItBGQutrA2EyAQD8EpK5+qOuTAIGMBKSujKB1Q4DASgSkrpUo2YcAgQEVkLoGtHCGTaCYAlJXMetqVgQI1ASkLicCAQI5EpC6clQMQyFAoNsCUle3RbVHgEAHAlJXB3gOJUAg7wJSV94rZHwESiUgdZWq3CZLoGwCUlfZKm6+BHItIHXlujwGR4BAZwJSV2d+jiZAoKsCUldXOTVGgEC+BKSufNXDaAiUXEDqKvkJYPoEii0gdRW7vmZHYMAEpK4BK5jhEiBwMQJS18Vo2ZcAgR4LSF09BtY8AQL9FJC6+qmvbwIEUgJSVwrErwQIFElA6ipSNc2FwMALSF0DX0ITIECgvYDU1d7GOwQIZC4gdWVOrkMCBLITkLqys9YTAQLLCkhdyxLZgQCBwRWQuga3dkZOoIACUlcBi2pKBAg0BKSuhoS/CRDIgYDUlYMiGAIBAr0SkLp6JatdAgRWISB1rQLNIQQIDIqA1DUolTJOAqUQkLpKUWaTJFBWAamrrJU3bwK5FJC6clkWgyJAoDsCUld3HLVCgEBXBKSurjBqhACBfApIXfmsi1ERKKmA1FXSwps2gXIISF3lqLNZEhgQAalrQAplmAQIrEZA6lqNmmMIEOiRgNTVI1jNEiCQBwGpKw9VMAYCBOoCUpdTgQCBAgtIXQUurqkRGDwBqWvwambEBAisWEDqWjGVHQkQ6L2A1NV7Yz0QINA3Aamrb/Q6JkCgWUDqajaxhQCBwghIXYUppYkQKIKA1FWEKpoDAQJtBKSuNjA2EyDQDwGpqx/q+iRAICMBqSsjaN0QILASAalrJUr2IUBgQAWkrgEtnGETKKaA1FXMupoVAQI1AanLiUCAQI4EpK4cFcNQCBDotoDU1W1R7REg0IGA1NUBnkMJEMi7gNSV9woZH4FSCUhdpSq3yRIom4DUVbaKmy+BXAtIXbkuj8ERINCZgNTVmZ+jCRDoqoDU1VVOjREgkC8BqStf9TAaAiUXkLpKfgKYPoFiC0hdxa6v2REYMAGpa8AKZrgECFyMgNR1MVr2JUCgxwJSV4+BNU+AQD8FpK5+6uubAIGUgNSVAvErAQJFEpC6ilRNcyEw8AJS18CX0AQIEGgvIHW1t/EOAQKZC0hdmZPrkACB7ASkruys9USAwLICUteyRHYgQGBwBaSuwa2dkRMooIDUVcCimhIBAg0Bqash4W8CBHIgIHXloAiGQIBArwSkrl7JapcAgVUISF2rQHMIAQKDIiB1DUqljJNAKQSkrlKU2SQJlFVA6ipr5c2bQC4FpK5clsWgCBDojoDU1R1HrRAg0BUBqasrjBohQCCfAlJXPutiVARKKiB1lbTwpk2gHAJSVznqbJYEBkRA6hqQQhkmAQKrEZC6VqPmGAIEeiQgdfUIVrMECORBQOrKQxWMgQCBuoDU5VQgQKDAAlJXgYtragQGT0DqGryaGTEBAisWkLpWTGVHAgR6LyB19d5YDwQI9E1A6uobvY4JEGgWkLqaTWwhQKAwAlJXYUppIgSKICB1FaGK5kCAQBsBqasNjM0ECPRDQOrqh7o+CRDISEDqyghaNwQIrERA6lqJkn0IEBhQAalrQAtn2ASKKSB1FbOuZkWAQE1A6nIiECCQIwGpK0fFMBQCBLotIHV1W1R7BAh0ICB1dYDnUAIE8i4gdeW9QsZHoFQCUlepym2yBMomIHWVreLmSyDXAlJXrstjcAQIdCYgdXXm52gCBLoqIHV1lVNjBAjkS0Dqylc9jIZAyQWkrpKfAKZPoNgCUlex62t2BAZMQOoasIIZLgECFyMgdV2Mln0JEOixgNTVY2DNEyDQTwGpq5/6+iZAICUgdaVA/EqAQJEEpK4iVdNcCAy8gNQ18CU0AQIE2gtIXe1tvEOAQOYCUlfm5DokQCA7AakrO2s9ESCwrIDUtSyRHQgQGFwBqWtwa2fkBAooIHUVsKimRIBAQ0Dqakj4mwCBHAhIXTkogiEQINArAamrV7LaJUBgFQJS1yrQHEKAwKAISF2DUinjJFAKAamrFGU2SQJlFZC6ylp58yaQSwGpK5dlMSgCBLojIHV1x1ErBAh0RUDq6gqjRggQyKeA1JXPuhgVgZIKSF0lLbxpEyiHgNRVjjqbJYEBEZC6BqRQhkmAwGoEpK7VqDmGAIEeCUhdPYLVLAECeRCQuvJQBWMgQKAuIHU5FQgQKLCA1FXg4poagcETkLoGr2ZGTIDAigWkrhVT2ZEAgd4LSF29N9YDAQJ9E5C6+kavYwIEmgWkrmYTWwgQKIyA1FWYUpoIgSIISF1FqKI5ECDQRkDqagNjMwEC/RCQuvqhrk8CBDISkLoygtYNAQIrEZC6VqJkHwIEBlRA6hrQwhk2gWIKSF3FrKtZESBQE5C6nAgECORIQOrKUTEMhQCBbgtIXd0W1R4BAh0ISF0d4DmUAIG8C0hdea+Q8REolYDUVapymyyBsglIXWWruPkSyLWA1JXr8hgcAQKdCUhdnfk5mgCBrgpIXV3l1BgBAvkSkLryVQ+jIVByAamr5CeA6RMotoDUVez6mh2BAROQugasYIZLgMDFCEhdF6NlXwIEeiwgdfUYWPMECPRTQOrqp76+CRBICUhdKRC/EiBQJAGpq0jVNBcCAy8gdQ18CU2AAIH2AlJXexvvECCQuYDUlTm5DgkQyE5A6srOWk8ECCwrIHUtS2QHAgQGV0DqGtzaGTmBAgpIXQUsqikRINAQkLoaEv4mQCAHAlJXDopg
CAQI9EpA6uqVrHYJEFiFgNS1CjSHECAwKAJS16BUyjgJlEJA6ipFmU2SQFkFpK6yVt68CeRSQOrKZVkMigCB7ghIXd1x1AoBAl0RkLq6wqgRAgTyKSB15bMuRkWgpAJSV0kLb9oEyiEgdZWjzmZJYEAEpK4BKZRhEiCwGgGpazVqjiFAoEcCUlePYDVLgEAeBKSuPFTBGAgQqAtIXU4FAgQKLCB1Fbi4pkZg8ASkrsGrmRETILBiAalrxVR2JECg9wJSV++N9UCAQN8EpK6+0euYAIFmAamr2cQWAgQKIyB1FaaUJkKgCAJSVxGqaA4ECLQRkLrawNhMgEA/BKSufqjrkwCBjASkroygdUOAwEoEpK6VKNmHAIEBFZC6BrRwhk2gmAJSVzHralYECNQEpC4nAgECORKQunJUDEMhQKDbAlJXt0W1R4BABwJSVwd4DiVAIO8CUlfeK2R8BEolIHWVqtwmS6BsAlJX2SpuvgRyLSB15bo8BkeAQGcCUldnfo4mQKCrAlJXVzk1RoBAvgSkrnzVw2gIlFxA6ir5CWD6BIotIHUVu75mR2DABKSuASuY4RIgcDECUtfFaNmXAIEeC0hdPQbWPAEC/RSQuvqpr28CBFICUlcKxK8ECBRJQOoqUjXNhcDAC0hdA19CEyBAoL2A1NXexjsECGQuIHVlTq5DAgSyE5C6srPWEwECywpIXcsS2YEAgcEVkLoGt3ZGTqCAAlJXAYtqSgQINASkroaEvwkQyIGA1JWDIhgCAQK9EpC6eiWrXQIEViEgda0CzSEECAyKgNQ1KJUyTgKlEJC6SlFmkyRQVgGpq6yVN28CuRSQunJZFoMiQKA7AlJXdxy1QoBAVwSkrq4waoQAgXwKSF35rItRESipgNRV0sKbNoFyCEhd5aizWRIYEAGpa0AKZZgECKxGQOpajZpjCBDokYDU1SNYzRIgkAcBqSsPVTAGAgTqAlKXU4EAgQILSF0FLq6pERg8Aalr8GpmxAQIrFhA6loxlR0JEOi9gNTVe2M9ECDQNwGpq2/0OiZAoFlA6mo2sYUAgcIISF2FKaWJECiCgNRVhCqaAwECbQSkrjYwNhMg0A8Bqasf6vokQCAjAakrI2jdECCwEgGpayVK9iFAYEAFpK4BLZxhEyimgNRVzLqaFQECNQGpy4lAgECOBKSuHBXDUAgQ6LaA1NVtUe0RINCBgNTVAZ5DCRDIu4DUlfcKGR+BUglIXaUqt8kSKJuA1FW2ipsvgVwLSF25Lo/BESDQmYDU1ZmfowkQ6KqA1NVVTo0RIJAvAakrX/UwGgIlF5C6Sn4CmD6BYgtIXcWur9kRGDABqWvACma4BAhcjIDUdTFa9iVAoMcCUlePgTVPgEA/BaSufurrmwCBlIDUlQLxKwECRRKQuopUTXMhMPACUtfAl9AECBBoLyB1tbfxDgECmQtIXZmT65AAgewEpK7srPVEgMCyAlLXskR2IEBgcAWkrsGtnZETKKCA1FXAopoSAQINAamrIeFvAgRyICB15aAIhkCAQK8EpK5eyWqXAIFVCEhdq0BzCAECgyIgdQ1KpYyTQCkEpK5SlNkkCZRVQOoqa+XNm0AuBaSuXJbFoAgQ6I6A1NUdR60QINAVAamrK4waIUAgnwJSVz7rYlQESiogdZW08KZNoBwCUlc56myWBAZEQOoakEIZJgECqxGQulaj5hgCBHokIHX1CFazBAjkQUDqykMVjIEAgbqA1OVUIECgwAJSV4GLa2oEBk9A6hq8mhkxAQIrFpC6VkxlRwIEei8gdfXeWA8ECPRNQOrqG72OCRBoFpC6mk1sIUCgMAJSV2FKaSIEiiAgdRWhiuZAgEAbAamrDYzNBAj0Q0Dq6oe6PgkQyEhA6soIWjcECKxEQOpaiZJ9CBAYUAGpa0ALZ9gEiikgdRWzrmZFgEBNQOpyIhAgkCMBqStHxTAUAgS6LSB1dVtUewQIdCAgdXWA51ACBPIuIHXlvULGR6BUAlJXqcptsgTKJiB1la3i5ksg1wJSV67LY3AECHQmIHV15udoAgS6KiB1dZVTYwQI5EtA6spXPYyGQMkFpK6SnwCmT6DYAlJXsetrdgQGTEDqGrCCGS4BAhcjIHVdjJZ9CRDosYDU1WNgzRMg0E8Bqauf+vomQCAlIHWlQPxKgECRBKSuIlXTXAgMvIDUNfAlNAECBNoLSF3tbbxDgEDmAlJX5uQ6JEAgOwGpKztrPREgsKyA1LUskR0IEBhcAalrcGtn5AQKKCB1FbCopkSAQENA6mpI+JsAgRwISF05KIIhECDQKwGpq1ey2iVAYBUCUtcq0BxCgMCgCEhdg1Ip4yRQCgGpqxRlNkkCZRWQuspaefMmkEsBqSuXZTEoAgS6IyB1dcdRKwQIdEVA6uoKo0YIEMingNSVz7oYFYGSCkhdJS28aRMoh4DUVY46myWBARGQugakUIZJgMBqBKSu1ah19Zgzk5+9YWh4ZOjKWx4+2tWGl2msqd+je9YNj1RHMrxtonbsxNbo15F1u44v05i3CXRJQOrqEqRmCBDIo0ChU9e5I4+N37H+yqtqSWJk6NIN624cf/jFMzmrw+SWetZZXbg5vnNjPRtVp/n+HYeapnfgnoZAkKgqlaZ+pa4mOhuyF5C6sjfXIwECmQkUNnWdfvq+y9YEcaSRbIaGR9betf/0XGbCy3Y0c+ALnax1JVPX8KbtB5M9zj1z56WhQ30dq1Jp6lfqSsr5rS8CUldf2HVKgEA2AsVMXdNP3rumHrM2jO565sR0pVKZOfH0ntHL6/ljzV2T1W1F+GmkrkuvuqQ25bVfeCkxrafvi7bXF/waVw8T+0S/SF0tUGzKWkDqylpcfwQIZChQxNQ1d/DORrpav+tIAvOVrzVuXdq0/cXEOwP7SyN1bdw8Gq1pXXrfgWAyk5+qXV68fmx9PYbGa13BTtFLqauJxIbsBaSu7M31SIBAZgJFTF0T2xo3ct03mb6SOPPolvpy1yX3RJfiGqlleGTLxJkD45svWTMytHWyVoCZY4/ft/49V1RbW3PVZVv2PLx9rN5yfYdK5c2XHh6/Y/37N1SPGq7eOrZ+67cOvBOXL2j8iTMT45svqwejG27edfB0fa9gn+gm9tr26aOTO7fetDa+OPieG257/FTcbvCicfjGr+2s3/x+1Z1x7Jqb3FIb2GXjOxp3j8Wpq3FgdeK19pZNXXMvbb+6sVi4dbIx/mAsXhLoWEDq6phQAwQI5FeggKlr8ebxW/c3wx/b1UhOG/ccq769GD7Gtm6rX4yrhqozE3dtqGes4J6wVOpabC3c5/L7DtTT3mLjay5Pt3bz4zO14S3uU08/lcrpiXvXNt2U1uaLhPHh2x5rxM1LPvVMfeL1LZu2vxjfO7/61HXoC5vq01+cYDOwLQQ6EpC6OuJzMAEC+RYoYOqKn3eQvsMpqsQTjZWwNffWlnji1FJfxakGi62TlQP31e8MWzO2/UD1a4+nD+xYHyehxlrXsd133Dw+efjNWn6aPvXY1nq0GvtuOlENbRw/8GalMn3q0Vsb3yjc8v3
avWWLA4jXnBodJW5Ku3tvy8c3xIdvm4hvnL80WuSbefTPapO6esfhxW8srjZ1xRdn14ztfCXfJ7XRDbKA1DXI1TN2AgSWEShf6mosCA3V73+KU8vI0OV3PHy0mpamz83EC2ZhdFtc7GmkrkXdc2dOnDwyuf2maDWoxeXLJxr7PnlvfcWoabEtSl1xL4tLVo1DW/0dj78ap+p3cQ1fdefTlcq5/TfXYmJtkazDta7JndfXU+nYd/P26I1WKrYNrIDUNbClM3ACBJYXKGLququxarX0Fcbrm64w1heoapcdGyEjvupXqVQWryfGqeu1/XffeEP9pq7wImN9hzgSNe6dqlQq8e1TrVPX8TjfhF23r2TcRW0R60D9G4uXfOqZ04/fUYt3YzurD1/tKHVdEl8e/bP9budqXwvvdEFA6uoCoiYIEMirQAFT14lHNtcXk+oX2kL7xbvpLxuPnrAQp5YgGAU3e4XRJ526zk3e1viy5NClG9Zes3nLjY07nzpJXY2nnoZdh3NIvo7HH106fGn7+2uh89L77oxurv/wt05UD+goda3bcsdlUaZcc8eji98VSA7EbwS6ISB1dUNRGwQI5FSggKkrvrI2NDwy+kjye3/xzUnDNz34WlSSOLWEqavy2K31BbPwHvb4jrH6lxwbs/ouyAAACYhJREFUt4it/ezB+tO/Glsa34Js1fgya12VicZa3SquMFYqlcPjjeRXu7x43UORQGepa9eRA/c0vg1QXyPM6QltWIMuIHUNegWNnwCBJQSKmLqq3wHc1rgXfsPNe4+cbnpKavAcr1bBqFI5sfeW+oLZmrHxF2eqD1l9vNFmdLt9pRIvql1ya+1h928e3N64LtlJ6lq8kX+4Mfi5mcNPjt+2e7m76aM6v7ijvi5VXZ2Kw2WHqet4JVjYC/SWOLW8RWA1AlLXatQcQ4DAgAgUM3VVl3weuaX54Qu1IHXFdePxs7IST45IXNGbOxLfX1WPX9WndjXuGIsuIL62bzTeUrsAt/h4iNVfYayeOIfjx1sE94qFq27B2RWnxhZfThyqX17s+Apj7V+/Dp743/TvDgUD8pJAJwJSVyd6jiVAIOcChU1dVfc3Dz541+Z10WNOh0eG3rNp/dY9E7VvKQZViVNL4gpjdYfp44996qY1764lrfdUn2t6IA5D9VBVOf3cnpuvrj1G9d2bRj+7/0Q3rjBGYzt94FtbRhsPX11z1dprNo8fiJ5GEYy9+jIef5y6Fu/6Dy6wdrzWVe3rzGPxYy8uv3fiXGokfiXQBQGpqwuImiBAIK8ChU5d3UaPHyfRZtmp2/1pj0D5BKSu8tXcjAmUSEDqWnGx42eQDhfm33Bc8dztSCArAakrK2n9ECDQBwGpqx368Qe33vfwwVPTtX/bZ/q1Z7Z/uHYlcXhk6A/31Z7F0O5A2wkQWL2A1LV6O0cSIJB7AamrXYni+6Uad9BHN7Z/4L5JD6xqZ2Y7gY4FpK6OCTVAgEB+BaSudrU5NbF987orG/9m4pqr1o7esX3vS6fr/6x1u6NsJ0CgIwGpqyM+BxMgkG8BqSvf9TE6AiUTkLpKVnDTJVAuAamrXPU2WwI5F5C6cl4gwyNAoBMBqasTPccSINBlAamry6CaI0AgTwJSV56qYSwESi8gdZX+FABAoMgCUleRq2tuBAZOQOoauJIZMAECKxeQulZuZU8CBHouIHX1nFgHBAj0T0Dq6p+9ngkQaBKQuppIbCBAoDgCUldxamkmBAogIHUVoIimQIBAOwGpq52M7QQI9EFA6uoDui4JEMhKQOrKSlo/BAisQEDqWgGSXQgQGFQBqWtQK2fcBAopIHUVsqwmRYBAJCB1ORMIEMiRgNSVo2IYCgEC3RaQurotqj0CBDoQkLo6wHMoAQJ5F5C68l4h4yNQKgGpq1TlNlkCZROQuspWcfMlkGsBqSvX5TE4AgQ6E5C6OvPr7tFH96wbHtt5tGuNTmwdGdo62WlzE9uGVjKq6uBHtkx02lvPj69OZ2RFM+r5UHTQQkDqaoFiEwECRREoZOqa3FL9WN2WDgDVj9umjR0Wsv4RPlL7IF/8v+t2HV9Nw1JXk1o1OFarWfuTSJBRlavbw6h3bNfY0MY9x5raqW/otnC7fmxftYDUtWo6BxIgkH+BAqeupmWeXqSupgpXU8ISn/pN+yc2dDsTdGetKzHEbH85umfLYn6txqw4zk5sbbwOy1oFXDJYhzu3n8oy0a39gd7pXEDq6txQCwQI5FagsKlry66mC14r+8TtqFTVLjq4RCh1LakfhKHJLYvp6vjOjXXzia2Jda8Wja3sHAg6atGGTT0VkLp6yqtxAgT6K1Dc1DVRqX52Ln42VyrpT9zFS1SN1anjOzc2VlCqZUksrlSWXUdJ7V8rbG0M9Qtk4YWw8MLZ4vZk6lpyn8XBx8s/tQ4Xtw9tnVxyrSvYM5EUI4TJnRsbF++SbsGMxnZOBDeihYOvv17sIjnIVZ7zQRhqlbomti19E1voWa94dZz16oTnwOI1zVUvW65yig6rSF1OAgIECixQ5NRVqVQDxOIncSI9VANBHHeqn8e1z9fgcz1KaYuXCxNvtToj4kbiNxOH1D7g6z2GF86qo2osjyWDy+LFtfQ+wV1r4VvJ2FftfTiYfjysSqWWIIN8WW0k1qihxUOqJNJqrc34El4UqloPvpZmGnsmBhmO46Jeh5k4eD2xrVa7MIe1b7Y6ksaoKpWJrfHrxKmSKFz7xrzTCwGpqxeq2iRAICcCxU5d9XhRzzrBJ271YzW8NTuOO9UX9U/i6p1DW7clfl28xyhdvlocaeSP+puTW8LsUv2Mb5mBgvwXDyPdfGqfOCFVomQZrSQ1Z4U2PbYYSbBnIn9UB7LoFgwjGmE44PTr1oNMz2zFv1dH2ChN7aAo89W/jVi/zas6hmjtKo5TyQ4W55LcXqmtjDZWtpol03v7vWcCUlfPaDVMgED/BYqeuqJP0+jTOvjErX2ENy4t1T+no8wUB4vobqFqBKmFtnSESpSu9mGfvoi2mACCjuKol3w3XgNLPDliJfvU1vOirhdvMG8MLshSjU3Vv+NJBRvrK0aJGFd/O3arjieZZqpbGlmz3etGj2mf6vY4OS39KIfabo1IFAy68XJxuas+mHSqbuwYJMj6psSZ0OhC6orBsn8hdWVvrkcCBDITKH7qWrzOGKeH2rJTqxBQZa8nlaN71sXXHLdOVj+tGx/JTbVpWhyK9mjOKI0ja5/0jbBSSx7NqWsl+9Taq/Y+sKmrIbLE31XG4GJoiz0bgTi8r6tRvvTuwTkQXWaNlzzDpBW+Trfg9x4LSF09BtY8AQL9FChD6mpcZ6x+q7G+VLPUx2otYE3sGqvHstrn9+KvTcWqNtVoNvlmvGyW3BzGrOo7wW6Ly0XBxrb7RM0mUlcyGraJg3GyDMZVz5rVLYsN1t+Pk0otANUDYvRe9a1GfFwcfATe2N6yzXrTy/3V3GPTEYsrfBeZulLnQPhr+LqpQxt6KyB19dZX6wQI9FWgHKmrfp0xuDGo9nEeLHdNbomv/dUve8WhoXZ5a/
Fm82S5lowFtfWqxUtyx3Ztqz13PhFravs0boFaDC4r2ScaSbBnclK1ONjyTrKWd9PH8w0ajHqIU1cU1xYjZiTTOHBx8F1LXdUpLNYlKR+PLV6DDAYQhMjkUcFcElcba3SLmTXcLdmA33otIHX1Wlj7BAj0UaAsqat+nXExMdSTR+MZAY3oUCtF9TM7/ixPR42wWNWA0mgh8SLOc1Goqu8TB4joM752P9mWiWBZK8gN9etfS+9THUsyJAUtr9t1vG34aHyNsTH4xWiYbjBxN321v2BG2ybSA24whtubBxkSLvk66GuRd3GxLd1L42unw4nyJXpIxqnF9jfumUg81L5R2eA0SLTjl54JSF09o9UwAQL9F/g/7jZLAjzT+pkAAAAASUVORK5CYII=">|
| Kiwi-8B-Preview | <img style="height: 250px;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAx8AAAIpCAYAAADD8IWhAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAGCiSURBVHhe7d13fFP1/sfxd5ImXdBSkLIKZYMIsvcSEQeiCAiIqFcBERci4k8BB7gXgjguCoILARUcIKgoey+RLVA2FCiFMjrTJL8/oLnNaUunx7a8no9HHtd+xzlN6D2fvHPO98QSeXVLjwAAAADgH2Y1NgAAAADAP4HwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDQL4snDtDB3as0ZsvjzJ2XVF4HQAAyB7hA0VOz9tv0fb1i7Tnr+Ua/vhgY7fatGymNYvnZtlfXEyf+r4O7Fjj89i9ebl++WG6etx2s3E48mHqf8fpwI412rt1pUY/PdTYfcV48IG79dtPX2v35uU6sGON9m9fre0bFmvmZx+pWtUqxuEAAGRA+ACKuFSXS/HxCYqPT5DVZlXd2jU09rmn1L9vT+PQIunl55/Wqj9+0OefTDB2maJNy2aqV7e2JMnPZlPrlk2NQwqFLtd30HdffaK1S+ap5+23GLvzpUK5cH328Xg9O/wx1a5ZXXa7nxKTkpScnKLgoEBVjaysCuXCjdMKzD/53CSpWtUqmvDWWG1csUCvjx1p7AYAFCDCB1DE7T9wSNc0v17XNL9egx4ZoegTJxVSsqS6d7vROLRIatWiiSpWKC8/P5uxyxTt27bUVWXCdPbceSUnp6hyREV173aTcdi/rsE1ddWwwdUqERxk7Mq3t14ZrY7tWskjj1at3aB+Dzyqq5tcp7pNOuqZ51/Tnqj9Sk1NNU4rMP/kc9OlcNWqeROVKR0mq9Vi7AYAFCDCB1CMLFm+WqtWr5ckVSxfTg3r1zMOQS41bdxAfn5+WrVmg6JPnFTJEsFq0bShcVixdW+/XmrS+Fq53R79OO9X3f3AY1qzbpO3f9bsn3TvoKFat3GzzzwAADJD+MAV5amhD2ntknmK2rrKu0bip2+mqUO7Vt4xwx8frD1/LdeaxXM1/PHBWr/sZ+3fvlo7Ny3RhLfGqkK5cE167w3t3LREB3as0V9rFuqFZ4f57KdWjWqa+t9x2r5hsXetwKKfZ6lvr9t9xv0TUl0uSZKfn5+Cg4N8ns/Tw4boz1W/avv6Rd7LV+7v31uLF3yrvVtXXvZ3rVAuXO+/87L3Of395zJN+fAdWSy+nxS/+fIoHdixRmsWz1Wbls0ytC+cO8NnfN9et+uXH6Z71xFEbV2lhXNn6M2XR2vPX8tVq0Y1SVLbVs195qfNS/97fz3tA59t59cNndqpRrVIJSQmas36Tdq2fZesVquaN22k4Cw+hbdYLJrw1ljv38f2DYs14a2xGcbn5G8xp6/lwrkzNPThAbLb7QoODtK7b7xYYGueOrRrpeCgQB08dFgffPyZsTtTOXlu6f8uHx18v9Ysnqv921dr79aVmj19svff/XLPrUK5cI17/QVtWD7fu6+//1ymLye/552fJqu/s8kfvK2vp32g8uXKescd2LFG06e+7zMfAFAwCB8osux2u4Y+PEAHDIuu07+RSG/c6y/okQf/o/CyZRRzKlZ7ovbL5Xbp2vpXa8IbY3Rb1y4+40NCSmjgfXcpMTFJZ+LOKsDfX91u7qxpH49Xpw5tdOzYCcXHJyg0pKTu7HGr+vXuLl0KHu+/87I6dWij+Ph47fp7r6KjT6ha1SoaOeKxDPspaGWvKiNJijt7TqvWbvC2Oxx29et9h8JKhXrbnh42RCNHPK6qVSJ07tx5/b0nSokJiapeLVIvjnpSQwbe6x371iuj1e2WGxQUGKDjJ2J0LPq42rVursjKlbxjcuupoQ9p7OinVLd2DbncLu2J2q+jx6Ll73AoPj5Bu3ZH6cKFeEnS6dNx2vX3Xu3bf1C3de2ikSMeU51a1XX69Bnt+nuvTp06ravKlDbuIl86dWijsFKhOhkTq2Ur12r9pr+UkJioShUrZLn2oHWLpup64/U6fvykTp+JU1BggG67pYteem6Ed0xu/xazs2//QR08fFRut1upqamK2ndQu3ZH6WTMKePQXKtaJUKStGPXHu0/cMjYnUFun1tISAk9Mvg/cjqdOnI0WjarVU0a1dczwx+VsnlurVs21U2dO8pqtWrP3n3aE7VfktSuTQvvfGXzd3bu3Hnt3rtPSUnJkqTo4ye06++9OnjoqHc+AKDgED5QZHk8HiUmJXkXW3sfCYlyudw+Y+/t10s3d+kkSZrz0wK16nSbutzWTw8/MVLRJ06qdOlSGe4QFeDvr4WLl6vDTb3U976HdeDQEfn5+alaZGVN+fxrde7WV6PGvqmz586rZIkSanDN1ZKkIQPvUe1a1bV63UZ16tpHN/e4Rzf3uEc7du1RqdAQ3dS5o89+CkqtGtX0xkuj1LZVc7lcbq3b8KdPf6nQEJ0+E6f7Bg/TNc2v1+kzcerb63Y5HHatXLNB7W/sqZu691fPux/Urr/3KigwULd17aLg4CDvpTcul0uzZs9Vq063qVPXPnprwn+9b9pyq3HD+rqj200KCPDXlm07dXvvB9Tltn7qcFMvdbipl156Y7xu632/ok+clCTt3L1HN/e4Rw8NfVa1alRTcFCgYk7F6qmRL+nmHveoc7e+eu/DT427ybPg4CA1ura+rFartm3fpf0HDmnZyrU6GROroMAANW+S+aVXQUGBGv7sGHXq2kftb+yplWs2yGq1qG2r5mrTslme/haz89DQZ/XjvF/lcrmUnJyiDz/5TLf1vl9fzZxjHJpn8QkJxqYM8vLcAvz9tXDRMrW/safa39hTi5etksViUd3aNdSwfr3LPrfjJ2L08dSv1P7Gnrq5xz3qcls/zVuw0Gd+dn9nT416SWNefVdxZ89JkpatXKube9yjUWPe8Pk9AQAFg/CBIis1NVWTp33tXWyd9njw0acVcyrWZ+zVdWopKDBA0cdP6Ivp33nb06+RqF2zus8aiXPnL2jhH8skSXui9utY9HFJUvSJk/ruh/mSpN8XL9epU6dlsVhks138v9M19erKarWqTctm2r5+0cVLb9Yv0jVXX7xjUljY/848FIRaNap5L8G5686LYWLVmvV6c/xHPuOczlTN+WmBlq1YI0lq2KCeQkqW1Nlz5zXjmx8UH3/xzeWeqP36fckKOZ1OVapYXte1a+19/Y6fiNGMb37wbnPqFzN1/GSM9+fcaNmsscpeVUZnz53Xp1/M9H5qnRMxp2KV4kxV2avK6MVRw/XwoItnaH7+9Q/j0Dy75cZOiqxcSQmJiVq/6S/p
0uL+bdt3yWKx6NoG9TK9veymv7Zp3i8Xf4/4+AQtXLRMiUlJKlkiWOXLlc3T32Jh4GfLfsF/Xp7bmbizmrdgoffnHbv2yOl0ei8bvJxVazfo6LHjmjThdS37dbZ2bFysnt27SukuO8zP3xkAoOARPnBFsNmsslgsSkxK0l/bdvj0HY0+kembnaSkZJ2JO+szVpKcTqf38pP4+AS5Pb5nWdJCyJ9/bdOcnxZkeCxfuc5nfH6l3Wo3+sRJbd6yXa+9877uffAJb5hIc/bcOW3dttP7s81mk8WS+fPct/+gUlKcsvv5yd/fcdnXL6+CggJls1mVmJikmBjfsJidL2fM1qzvftS58xdUs3pVPTP8Ua3640eNeGKIcWiedWjTUsHBQQoKDNTY0U95L+vrdssNkqTy4WUzPYt1+vQZn5/Pn78gj9vj/flyr2VWf4v/pnPnLkiS6taple3vlJfn5nSmKiEhyfuz2+37/6fLeXrYEL0+dqTatGouu8Ou7Tt3a/OW7T5j8vN3BgAoeIQPXFECAwIyfKJcqUI52e12JSUney/xKQhn4s5q+LNjMzwmffqlcWi+pN1qt3Wn23XHXQM1edrXxiGSJLfbk+FyNEkKCPD3WQciSdWrRcrhuPianDt/3ttut9t9Pu2vVrWK7Ha79+fLCQ7K/I1rZvvPiZfemKA2nbvrlbfe0+69+xRSsoTuu7uXd+1NflSrWkX169WRxWJRUlJyhkv7Ul0uBQT4q0WzRsapGZ5n1cjKcjjscrvdcqV7Y52fv0XjPv5J6zZuVmpqqmpWr6pHHrzP2J2p/Dy3nKpWtYpuvfkG+fs79NsfS9W60+3qfc9D2r13n3GolI+/MwBAwSJ84Iqwbcffik9IVIXy5XRf/zu97de1b602rZvL4/Fob9SBHC2ozc6BQ0ckSc2aXOvzRX+NG9bX+DfHqEXTjG9Y/w2bNm/V2XPnFRpSUv363OH9NLpWjWq64bp2stvtOnrsuFav26QTJ09dvAyrQjn1vuNW7zbuvKNrhi+Xizl1Wk6nUyEhJVSnVnXp0uvctMm1PuPS77//XT0ybMfIavnf4apPz9vU5foOio9P0JTPZui1t9/XqdjTCgwIUIXy5Xzm5UWHti1Vvny4kpKSNeHDKRku7fvlt8WSpDq1aqhxw/o+c5s2uVbXtW8tXXotb7y+g+x2u2JOxWrLtp25+lvM6Wvpw2KRn5+fsTXP5vw4X/sOHJK/v0MD7+un18Y84/NvdW+/XvpyykS1aNooV88tT9I9twrlwhXg7y9JSri0HqVWjWoZQk9u/85ycnkZACDvCB+4Inw5Y7Y2bPpLVqtFPW+/RWsWz9XCuTP03/deV4Vy4ToWfUJfzZxtnJYn02fO8X7R35hRT2rx/G+0cO4MfT3tA7Vu0bRA3xjmx5Llq/XHkhVyud1q26qZlv82R7/+OF1zvp6sunVqKu7sOc2aPVfx8Qn6fu4v2rvvgOx2ux4aeK+W/Tpby36drQfu7Sun0+mz3Q2b/tLZc+cVFBio/xv+iP6YN0sT33lJ/naHz7gly1dr6fLVcrndatOymX7/eZYWzp3h3XbarWXT7tjUvFkjLf11tr776hM1bdxAH41/VYvnf6Nfvv9Kb70yWleVKa1z5y9o+87dPvvJi3ZtWigoMFCnYk9r9dqNxm7vXa/Cy5ZRpw5tfPpKBAXpv++9rj/mzfK+lsnJKfpp/kLtP3AoV3+LOX0tJenQkaPebxwf9fTj+mPeLD3xyEDjsFzbE7Vfb0+YpGPRxxUQ4K+7+/TQit9/0PYNi7Vr01K9/PzTqlWjmvz8/HL13HIjs+fWvGlDnYi5uN6o2y1dtHDuDE2f+r4iIy/enStNTv7O/tq2w3uGr9stXbR4/jea/MFbPtsBABQMwgeuGI88OUpTv5ipU7GnVS78KtWqUU1ut0eLl63SY089pyXLVxun5MmS5av1zPOvacu2nfJ4Ll4eUr1qpM6fv6Cff/k9w7Xw/6Znnn9N496bpKPHjis0JER1atWQw+HQn39t09CnX9D0WRfvlrT/wCGNfX38xeckjypHVFRISEnN/PbHDJfQLFm+WpM+/VIxp2IV4O+vyMgI7dl7QIuXr/IZJ0lPjXpZ496bpBMnTynA31+1alRT+XLhOn0mzrtmZfK0rxW176BsVquqRFSUv79D+w8c1pm4s6pSOUJ169RUqdBQ/b1nn1558z39+vsS425ypXHD+qpXp5YkaefuvZn+ey1buVbHj5+U3W5XqxZNfPqWLF+tPXv3q2pkZZUoEawTJ09pwoeTNeHDKd4xOf1bzM1r+d33P+un+QuVmJSkUqEhKl8+XKmpF7/zJb8WLlqm/wx+Uj/O+1VxZ8/JarUoOChQfnY/nYw5pUVLV3hfp5w+t9zI6rmNf3+y9u47ID8/P1WvGqnjJ2L0+6LlxunZ/p2lnUE7cfKUHA67qlSOkNVKeQSAf4Il8uqW/1sJCQAAAAD/ED7aAQAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMIWlZaceHmMjAAAAABQ0i8fjyVP4cLnd8ngkt9tt7AIAAABQxFmtVlksks1acBdL5Tp8eDweuT0euVyEDgAAAKC4s9msslosslgsxq5cy1WMScspbneu8goAAACAIirtvX8uz1lkKsfh43/Bw10gOwYAAABQ+Hk8Hu9Si/zmgByHD13aWX53CAAAAKBoKagckKPwkbaj9KkHAAAAwJUh/dVP+QkhOQofSpd28rMzAAAAAEVPQWWBbMNH+h0UxA4BAAAAFC3GHJDXTJBt+AAAAACAgpDj8JGWdljzAQAAAFxZ0tZ85PWMR5ochY+COMUCAAAAoGgqqDyQo/ABAAAAAPlF+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAIWM0+nUhPHjVSo0RP4Ou/5z3306e/ascVix9M7bb8vfYfc+ataooePR0cZhAIAiivABAIXM99/P0TPP/J8SExMlSTNnztCrr75iHAYAQJFD+ABwxToeHa2aNWr4fNKe2aN0WCk1b9ZUTz89QuvWrZPL5TJuqkAdOnjI2KTt27crISHB2FxoxcbGatq0abr55ptUvly4z+tZIjhIDepfo0cfeVjLly83TgUAFGOEDwDIRnx8vLZs2aK
J772n9u3aKrJKZc2aNfMfCyHXXnutLBaLT1urVq0UFBTk01YYOZ1OvTtunCKrVNaQhwZr8aJFOnPmTIYxu3fv1pQpU/TZtGk+fQCA4o3wAQC5FBMTo/vuvVdPDH3ce2lUQep0/fV64403FRgYKEm6665+Gjr0CeOwQsfpdGr4k8M0cuSzcjqdxm4AAAgfAJCe3W5XRERlRURUVtmyZY3dPiZPnqynhj9Z4G+07Xa7hj35pOLOnlNyilOff/GFQkNDjcMKnWVLl2rKlCnGZklS+QoVcvy6AgCKL8IHAKTTt+9ditq3T1H79unI0WNKSk7R7j17NXjwYONQSdK0adO0bOlSY/MVx+PxaObMGXK73T7td93VT8eij+vgwUM+r+uF+ASt37BRXW/t6jMeAFC8ET4A4DIsFosiIyP1/gcfasGCX1SqVCmffrfbrYnvT1RSUpJP+5UmPj5eBw4c8GkLDAzU40OHqkyZMj7tunR259prr1WvXncauwAAxRjhAwBy6PrOnfXyK68am7Vu7doMb7ydTqd+X7hQgwYOzHBHravr1tGLL76Q5fdXbNq0SWGlQn3m/PzzPJ8xAwcM8OnvckNnXbhwQZs3b1aH9u3k77CrfLlwbdmyRS+//JLPWH+HXS+88LzP9tJktu9vvpllHJYjiYmJ2r37b2Nznpw7d06vv/aaIiIqeX+vxo0a6rvvvs124X9iYqJmzpyhm2++SaXDSnnnlwgOUvNmTTVh/HidO3fOOE3K4o5o77z9tiRp48aN6t79dgUG+Hu3d8cd3bVz507jZgAAlxA+ACAXevXqpUaNGvm0nT59WgfThY9t27apWtVI3XprV3355Rc6fNj31rn79u3TG6+/rquvrqsF8+f79OXH+nXrdNONXbR27VpJUnJyslJTU9WzZ68MZ2wWL1qU6RcXrl2zxueWvhUrVlKLFi19xmQmODhYVatWNTZr4IABevutt/K1MP/AwYPq2KG9xox5UTEnT3rbd+zYof53363nn39OqampPnN06VKwP37/XTVrVNd/7rtPixctUnx8vLff6XRqy5YteuaZ/1Nklcr65ptZ8ng8PtvIjMfj0YTx49WmdSv9smCB91Izp9OpBfPnq1HDazVh/PgcbQsArjSEDwDIhTJlyqhN27bGZm3dutX73ykpKT5vcrOSkJCgIQ8PUVRUlLEr1y5cuKAXXnhecXFxxi7VqVNHN954k0/btm3bMuw3KSlJv/z6i0/b7d1vV2RkpE9bZiwWi+66q5+sVt+y4na79dxzo1WlcoTGvfOOYmNjffqzExt7Sg/c/x/t2LHD2OX1/sSJ2rBhg7FZP/zwvbp1u1WnTp0ydmWQkJCge++5R+9NmJBtaJg5c4ZGjnzW2Oxj5MhnteiPP4zNAHDFI3wAQC5VqljJ2JSpkJAQPTVihFasXKWDBw9p3/79+ui/k3y+r+N4dLS+/nq6z7y82LRpk9atW2dsliT5+fnp3nvv9fnukISEBK1ds8Zn3IEDB7Tu0lkTXQoU3W/vnuE7R7LSoWNHDRo0yNgsXbpsatSokapYobweHDRQR48eMQ7JVEJCgvbt26fKlavo/vvvV8+evWS3233GpKSkaNasmT5tO3bs0GOPPpphAXz16tV1//33a+DAgWrYsKFPnyS9/vpr+uuvv4zNPrZt2ya3262GDRtq4MCBuvPO3t7bIqdxu90aP2F8vs74AEBxRPgAgALmcDj0+htv6tDhI3rttdfVvHlzla9QQZUqRWjgwIF6fOhQn/ErV67M0ZmS7Njtdn3030k6e+68kpJTtG37dtWsWVOS1LJVKzVv3txn/C+//uKzUP7PPzfp9OnT3p+bN2+ups2aeX/Ojt1u17vjJ2jMmLHGLh9ffPGF6tSurUmTJmV7lkGXvvdk46ZN+viTyZoxc6Z++fU3ORwOnzE7d+70eQ0nT/4kwxmPl19+Rdu279DHn0zWR/+dpLXr1mvSx5/4hKu4uDhNmzb1sr+XxWLRuHHvau269frov5M0/euvtWXrVu9rnWbN6tXas2ePTxsAXOkIHwBQwOrXr68hQ4YoMDBQTqdTJ0+c0LKlSzVt2jQNffwxLVm82Gf8ubNnlZKS4tOWF2PGjNXAgQMVEBAgi8WiSpUiFBISIkkKDQ3V3Xf39xmffqF8amqqfvzhR5/+7t3vyPX3i9jtdo0cNUpbt21X9+53GLu9nE6nnhj6uN4dN+6yb/T9/Pz0/PMv+PweDRs2zBCkXKmp3u2cOXMmw1mdFi1a6KEhQ2Sz2bxtFotFvXv3Vvv27X3Grl+3LtPL19Lc0KWLBg4a5BNaqlSJ1HPP+y7iP3/+fIb1PgBwpSN8AEAueDwen/UdaWrW+t+n3h6PR8uXL1eH9u1UIjhIlStHqEuXGzTkocH6+OOPvQvC05w8GaPkfN6qt1z58up15+VvW3tL166qmO6SsdOnT+vPPzdJko4cOeLze5UqVUq3dM37d3DUrl1b33z7rY5FH9drr73uDUFGb7315mXXc4SHl1PVqr5rTiwWi0+IMEpIiFd09HGftmbNm2capEqUKKH6DRr4tGX379G+XfsMl1lJUq1atTO0HzuW+R3NAOBKRfgAgFw4ePCgli1b5tMWFBSkKlUuvkH2eDx6d9w43dD5+gwh458UUalShjtaGUVERKhdu3Y+bT/+8KNSU1O1bdtWHTt21NveqnVr1ahRw2dsXpQpU0ZPjRihQ4eP6IUXXjR2Ky4uTr8vXGhs9rLZbLJZsw4amTlx4qTi4s74tF1unc7l+jKTVfCxWq0Z1sckFMDldABQnBA+ACCHPB6Pvpk1y+dNui5dZpX2Rn3t2rUaM8b3TXb//vdoydJlOnz4iC7EJ2jO99/79BeE4ODgDAuxjTJbeL527VodPnxYCw0BoH///goICPBpy4/AwECNGj1aI0eNMnZpy5YtxqZ8KVcuXKVKhfm0HTX8m6W3d2/BrMuIOXnS5zbFMpwRAwAQPgAgRzwej7777lu9+OILxi7de99/vJf0zJ//s8/6jfvuu0+fTp2q1q1bK7xcOdntdu3dszfdbHMZF54fO3ZUixcv1tIlS7xtVapEZjhDkhObNm267N2dLBZLhu9I+ScEBQWrQoXyPm0b1q/P9HtNzp49q+3bt/u0VatWVSVKlvRpS++vv/7K8L0iHo9HS5ct9WkLDAxUuXK+vwcAXOkIHwBwGR6PRzt37tSdvXrpnv79M9y6tdP116tv377en48e8f2EPSg42Ofn2NjYPH9jeEHIbOH519O/8rkrU9dbu6pChYo+Y3Liww8+UIP69fXFF19k+kb/wIEDen/iRGOzrr32WmNTvpQqVUrNW7TwaVu3bp2mTJ7ss7jd4/FoyuTJGW5R3KFjR5UoUcKnLb3vvvtWc+f+5NO2Zs0afTplik9b1apVM/3iRQC4khE+ACCdWbNmqkb16qpRvboiI6sowN+hRg2v1bx5c41DVbNmTX3yySc+C5ntdj+fMZ98/LGef/45bdq0SV9/PV03dL4+0y/EM5Nx4fny5cu9n+T7+fmpT5++GdYu5NThw4
f04KCBCi97lSIqVfS+lhGVKqpO7VpasWKFz/irrrpKXW680actvywWix54YECGNTCjRo1Uvavr6qHBD+qhwQ/q2gb1NWrUSJ8x5StUyBDOjNxut+7q21ddbuisRx4eoi43dNZ1HTtkuEPWPffcq6uuusqnDQCudIQPAEjH6XTqyJHDOnLksI5HZ32nojp16mj2nO+9C83TXN+5s8/Pbrdbb7/1llq3aqkH7r9fO3bsUFiY73oEs0VGRur27rcbmyVJTZo0Uf369Y3NeRITE+N9LWNiYozdkqThw5/S1VdfbWzOt4YNG2rkyIzrS/bt26fPPvtMn332mXbv3u3TZ7VaNWHChGwX2qcFs2XLlunTTz/NcAMCXToj9uDgwcZmALjiET4AIBfsdrueGjFCq9esVd26dY3d6tGjpwYOHGhs9ho4cKCeeGKYsdlUFotF/frdneGL+iTp7rv7Z3pL2oIWGBioqdOmafhTT+X5LMvlWCwWPTFsmL6aPj3L2/ymFxFRWb/+tlA9evQ0dmXw1IgR6nT99cZmr6ZNm+rTTz815XUEgKKG8AEA2Shbtqx69Oipr2fM0ImTMXrttdcVbFjLkcZut2vcu+M1fvwEn7MitWvX1vSvv9b7H3yY7V2pzNCwYUN1vO46n7ZSpUqpQ8eOPm25Menjj/Xzz/M1aNAg1a5dO8PzLFu2rDpdf72mffaZjh6LVv/+9/wjwSPNxS8R7KOoffs16eNP1KJFC5/fKTg4WF1uvFEzZs7Utu3b1aFDB5/5WQkrFabZs+doxNNPq2x4uLe9Xr16+nTqVP2xaLEqVYrwmQMAuMjiudxXy15akJf2cLlccjqdsjv8jcMAAEXI2bNn1e3Wrj6Lrfv06atpn30mPz/fdStXsuPR0WrXrp3PN5W/+uprGvH00z7jAKC4c6Yky263y2azyWKxeB+5xZkPALgCrV2zRuvXr/dpu+322wgeAIB/FOEDAK4wZ8+e1bvj3/W57WyzZs3UufMNPuMAAChohA8AKOaOR0era9dbvLeYbdqkiRYvWuQz5tHHHlOZMmV82gAAKGiEDwC4Auz+e7f3FrPp1y/o0h24evfu49MGAMA/gfABAFewnj176fU33sxwZyoAAP4JhA8AuAI1bdpUM2fN0lfTp/N9FAAA03CrXQAAAACXxa12AQAAABQphA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExRrMLHzz/Pk7/Dri43dNaFCxeM3QAAAAD+RXkKHxcuXFCXGzrL32H3eQQG+KvTdR21YP58uVwu4zQAAAAAV7A8hY+suN1urVq1Snfc0V3Dnhgqp9NpHFJsHD58SI89+oju6tuXsywAAABADuQ7fLz66mtKTnEqOcWpmFOxGjRokCTps88+08aNG43Di40tW7Zo8uTJio09ZewCAAAAkIl8h4/0QkJCNPq551SlSqRSUlK0Yvly4xAAAAAAV6gCDR+SFBISqqpVI33aNm3apLBSoQorFaply5ZpyEOD5e+wa8hDg71jdu7cqTvu6K4SwUE+60eWL18uj8fjsz2n06n/fvSRqkZGyt9hV6nQEE0YP14pKSk+45Ru3zVr1NDx6Gifvnfeflv+DrsGDhjg0+5yufTdd9+qcaOG3vUsVSMj9eWXX+h4dLRq1qihnj16SJKWLVumMqXD5O+w6+ef50mSzp07p1GjRioiopJ3fuNGDbVp0yaf/QAAAABXkgIPH8ePH9fOnTslSZUiKvn0eTwevfzyS5o2bZokyelMlSR9//0cNWncSAvmz5ckRURUls1m06pVq3Rjlxv0ySefeLfhdDr1xNDHNWzYE4qOPqawsDCVKFFCzzzzf3r1lVe84/Lq7Nmzuqd/f/W/+27t2LFDwcHBKl+hgqKjj2nJ4iWy2WyqUqWywsLCJElWq1UVK1ZSRERlBQYE6uzZs+rTp7fGvfOO4s6c0W233a7mzZtr165dio4+ZtwdAAAAcMUosPDhcrm0bds2DRzwgGJiYlSjRg21b9/eZ0xiYqK2/PWXFi1eoqTkFH06dap27Nihxx59VG63W6Ofe04xp2IVtW+fYk7FavRzz8ntduuN11/XgQMHJEmLFy3S1KlTZbVa9dX06Yo+fkJHjh7TX1u2FsjC74kT39OcObNVqlQpzZv3s2JPn9HBg4e0b/9+NW/RQmXDw7Vo8RJ9OnWqJKldu3baum2bovbt0/WdO2vjhg1asnixypYtq81/bdF3s2drxcpV2hsVpQYNGhh3BwAAAFwx8h0+Ro8eJX+HXUGBAWrapLHWrFmjoKAgjRv3riIiKhuH6+13xqlt27ayWCySpPk//6xTp06pRYsWeuKJYQoMDJQkBQYGashDQ1SnTh0dO3ZUq1evkiTNmTNbHo9Hd97ZWz169PRup27dunpxzJh0e8q9U6dOafZ330mSxk+YoC433ujdfqVKERoyZIhhRkb+AQGy2WyKjY3V8uXLvbccrlQpQlWq+F6OBgAAAFxJ8h0+goODFRFRWRERlVW7dm09O3Kkdu7cpVu6djUOVVBQkOrXr+/TlnaJVrPmzRUaGurTVzY8XI0bN5EkHT1yVBcuXFBUVJQkqUPHjvLz8/MZHxIS4vNzbh06dEgHDx5U6dKl1aRJU2N3jrRo0UIDBgyQ2+3WkIcGq0zpMI0aNVKxsbHGoQAAAMAVJd/hY9So0Yrat09R+/Zp67btGjv2JZWvUME4LFslS5Y0Nl1WiRLBxqYCY3c45O/vb2zOEbvdronvf6DlK1bqxptuUmJiosa9847qXV1X69evNw4HAAAArhj5Dh/5Vbt2benSXakSExN9+k6fPq1du/63eD0gIEBlylwlSVqzZk2Gu2Bd7m5SsbGndCzd3a48Ho+2bt3qMyYsLEwlQ0J04vhxrV+/zqcvNywWi1q0aKG5c+cp+vgJdb31VsXFxemVV15WUlKScTgAAABwRfjXw0eXG29UUFCQFv72m96fONG7RiIxMVEffviBNm/e7F287ufnp44dO0qSZs6YocWLFnm3s3r1an3w/vven9OUL19OV11VVgkJCZozZ7Z3+3/8/rt++ulHn7GVK1dW11tukSSNfHakNmzY4O07evSIJk2alG70Rbv37NGJEye8P+/atct7KZkklS5dWq1atpIkJScne/cPAAAAXGn+9fDRuHFjvfjixYXizz//nEJDSqpG9eoqe1UZvfrKKxkWr995551q1KiR4uLidMstNyuiUkVFVKqoGzpfr1tv7WbYulShQkXd3v12SdLbb72lShUrKKJSRd11V1+179DBZ6yfn59GjR6tevXq6ciRw2rbp
rXKlwtXZGQVVa9WTevX/e9sSIUKFRUUFKTj0dFqeG0DRUZW0S8LFigqaq8aNbxWVSMjdWevXrq6bh298MLzkqTevfsoOPifu1wMAAAAKMz+9fBhsVj0xLBh+v2PRWrTpo1cLpeOHDksSbrvvvu0dt16n8XrZcPD9dNPc3X//ffLarUqJiZGZcuW1c8/z1fPXj3Tbfkii8Wil156WY8PHSq73a4zZ86odOnSmvP9D+rQ3jd8SFKVKpFaumy5nhoxQmXDw3XmzBmdPHFCbdq00cBBg7zjGjdurDfefEulSpWS0+nU2bg4lShZUjVq1FTbtm116lSM5s79SQcOHFCbNm30+x+LNMDwZYYAAADAlcTiMS6cMPB4PN6Hy+WS0+mU3ZG3xdgAAAAAih5nSrLsdrtsNpssFov3kVv/+pkPAAAAAFcGwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBQWj8fjMTam5/F4vA+XyyWn0ym7w9/bBwAo+iwWi7EpX6gPAFA8pNUHZ0qy7Ha7bDabLBaL95FbeQof/v4BsvnZlPvdAQAKI48kV6pL7suXhGxZLRbqAwAUI2n1ITk5qUDCR64uu0oLITYbhQUAihOLJJvN5j3O5xb1AQCKp/zWB6Mch4+0nXk8HuUh5AAACjmLxfdYn1PUBwAo3vJaHzKT4/CRJr87BAAUXvk5xudnLgCgcCuoY3yOwkfaztxut9xut7EbAFBMpD/O56TQUB8A4MqQ2/qQlRyFD6W7njc/OwMAFG55OdbnZQ4AoGgpqGN9jsOH0u0UAFA85fU4n9d5AICioaCO89mGD+NOOK0OAMWX8RhvrAHpGfuMcwEAxYfxGG+sATmVbfhIU1CnWgAAhVdejvV5mQMAKFoK6lifo/CR350AAIqenBz7czIGAFC85OfYn6PwkV5+dgYAKNzyc4zPz1wAQOFWUMf4XIcPAAAAAMgLwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYotiFj29/WKA7+j2kJSvWSpJWrd2kbn0GaebseZKk2NNnNODRZ/R/L7ypxMQkw2wAQHFgPPYDAAqHQhk+du/dr773P65ufQZp9MvjlJKSYhyilJQUjX55nLr1GaQBjz6j2NNnJElWq0UOh0MOu904BQBQTMQnJGrWnJ818NFndVvfB9WtzyD1fWCoZs6eaxwKAChECmX4SO/vPfu0d99BY7P27juov/fsMzar1+0365vP31eblk2MXQCAYmDvvoN6bMSL+nLm90pISlLrFo11w3VtVbF8uE7GnDYOBwAUIoU6fJS9qrRsVqvWrN9s7NLi5WvlZ7PpqjKljV0AgGLqTNxZvfvBp4o7e06PPniPpk9+V6OeekTDHnlA419/TkOH/Mc4BQBQiBTq8FG9ahXVrFFV6zb+pdNnznrbT585q63bd6lxw2tU9qownzkzZ89Ttz6DtGrtJp/2rHg8Hs2a87O69xusT6bNUGpqqnEIAKCQWLlmow4dOaY+PW7VzTd0lNV6+TKWmpqqb39YoD7/eVy33zVYb47/WPHxCd7+lJQUffP9z+o3cJi69Rmk+x9+WkuWr5HH4/GOOXDoiJ5+/nXdftdg9eg/RK+N+0inz8RJl2rI8lXr9dATo3Vb3wfVo/8QTZz0ufdy4ZxsHwCuJJc/av/LAvwd6tyxjaJPnNTmrTu87Zu37tDpM2d1fYfWsljy9xRWrN6gmbPnqlP71hpwb2/5+fkZhwAACgGPx6PtO/coNKSk2rRsIovFYhySwYKFS7V56w7173O7IiqW1/LV6/XdT79Il4LJR1Om65s589XlurZ66vFBqhxRUe9N+kyr1l38AGvX7iiNHPuOnM5UDR3yH93du7v+2rpT70ycosTEJG3cvE3vfvipwkqF6omH79d9d/VUQmKiXC53jrYPAFea/L1zN0GjBvVUoVy4/li6SikpKUpJSdEfS1epVs2qqlk90jg8V3bujtJHn05XnVo19OB/+hI8AKAQS0pKVuyZOPn7O1QiOMjYnamqkRF68ZnH1f3WLhr2yAMKDgrUzr+jlJScrN1792vFmg265647NODe3urUvpUeGdhfJUsEa9HS1UpKSdH3835TaMkSeuGZobrhurbqfcctuqlzB/299+J6xAOHjsjpTFXHdi3UuWMb9bjtRv3fE4MVEOCf7fZTXS7jrwsAxV6hDx+lw0LVumUT78LzvfsOas/eA+rcsY0C/P2Nw3Ps+IkYjf9wqkqWCNaTjzyg4BwWMgDAv8Nms8rf36HU1FSlpDiN3ZlqUK+OHA6HJKlM6VIKDg6S2+2Wx+3Rjr/3KikpWVM+n6VufQapW59BenDoKJ0+c1bnL8TrdGycovYf0pFjx3XfQ095x8yZ+6uSk1N0/kK8WjZrpHLhV+mjKdM1ZNhz+m3RcqWmpspisWS7fWcOnwMAFCeFPnxIUoc2zWW327V6/Z9atGy1SoeFqlGDesZhuVIiOEhhpUJ0/kK8zsT9bz0JAKBwcjgcqlYlQmfizmnDn1uN3bmWmnrxzMPQIf/R55Pe9nmMHvGIHA4/ud1u1a5ZTR+/92qGMc0a11flShX0wTtj9djge+V2uzVx0ud64bX3FB+fkO32AwLy/gEaABRVRSJ8VI6oqGuvqaNffl+mxcvXqHXLJiodFmoclislSgTryUcHqGSJYI19833t2h1lHAIAKGSu79BaISEl9PV3c7V+05Z8LdyuVaOq/Gw2bdn+t0JDSqpM6TDvIzSkpEJKllCliuV19NhxnTt33qe/TOkwORwOOZ2pCvB36KbOHfTxe6+qS6d22vX3Xu3asy/b7edkzQoAFDdFInz42Wzq3LGNnClOWSwWNW/cwDgkT8qHl9XwRwdIkt79cKpOnDxlHAIAKESqRkZoyAN3KykpSWPfmKgHHvk/vf3eJ3r3g0/12IgXNXHS58YpWapXp6bq1a2lpSvW6oVXJ+j3JSu1YOESvfzWB9q776AcDoduvqGDUpxOvfTm+/pq1g9atnKdpnw+S598NlOS9MPPC/X2xMlavHyN5l9a3F6yZAmVvap0ttsHgCtRkQgfknRN3VqqVrWy6tSqnu+F5unVqVVdd9zaRdHHT2r8R9N8bsEIACh82rdprvGvP6cmDa/RhQsJWrpynZasWKvk5BTVqVXdODxLgYEBGjl8iG64rq12792vCR9N06dffCuPx62wUiGSpDYtmujZJ4copGQJzZrzs96eOFmr1/2pyMqVJEkVy4drx649Gvf+FH0ybYbCSoXq
/4YNVpWIijnaPgBcaSyebM5Zu91u7yMlJUUJCQkKDw83DgMAFANn4s7KbrfLZrPJYrF4H5nxeDzeh8vlktPpVFip/F0SCwAonE6ePKmgoCA5HA5ZrVbvI7dyPwMAAAAA8oDwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUuQ4fFovF2AQAKCbyc4zPz1wAQOFWUMf4PIUPt9ttbAYAFHFutztfxYX6AADFU37rQ3o5Ch/GnSUkJlJgAKAYcbvdSkhM9GkzHvszYxxDfQCA4iWv9SErFo/H4zE2pud2u+XxeOR2u+V0OpWUlCSL1San0ymXy6XU1FS5XC7vOEne/wUAFC5pBcNischqtcpms8nPz082m012u93731ar1WdsZtIf891ut7cmUB8AoOjJrj543C4FBATIbrd7a4TVmqPzGD6yDR9pRSUtfCQnJ8tq85PL5br0SJXbfXFM2ngAQOGVVmCsVqusVotstovFJe1xsf1iYckqeKTxeDw+deJ/tYH6AABFzeXqg9uVKn9/f2/4SP8hVW7kKnykpqYqJSVFVpuft8ikPzOSblL6TQAACot0hSL9J1dpocNms3lDR27CR9ojrS5QHwCgiMmmPrhdqXI4HPLz8zMnfHg8Hm/4sPnZvUUmreCkjQUAFH7pT69bLBZv6DCe8ciusKQ//qevF9QHACiasqoPrlSnN3ykrxe5laPwkVZQXC6XUlJS5Gd3eNspLgBQ9BiLi/GRfkx2jAHE+Eg/BgBQuGVVH1KdKXI4HD7rAnNaJ9LLdfhwOp0+4SNtDACg6MmsyBj7spO+BhA6AKB4MNaHVGeK7Ha7eeHDc+myK6fTKbvDP0OxAQAUPVmFjdwWlKxqAvUBAIomY01wpiR774qY2QdWOZWr8JH+zEdmstkUAKCQyKpgZNWeU1nVgazaAQCFS1Z1IP2ZD9PDh93h7+0DABR9eSkgl0N9AIDiIa0+pJ35+FfDBwAAAIDir6DCR+6/lhAAAAAA8oDwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKfJ1t6tspgIAioi83LHkcqgPAFA8FIpb7fr7B8jmZ1PudwcAKIw8klypLrkvXxKyZbVYqA8AUIyk1Yfk5KQCCR+5uuwqLYTYbBQWAChOLJJsNpv3OJ9b1AcAKJ7yWx+Mchw+0nbm8XiUh5ADACjkLBbfY31OUR8AoHjLa33ITI7DR5r87hAAUHjl5xifn7kAgMKtoI7xOQofaTtzu91yu93GbgBAMZH+OJ+TQkN9AIArQ27rQ1ZyFD6U7nre/OwMAFC45eVYn5c5AICipaCO9TkOH0q3UwBA8ZTX43xe5wEAioaCOs5nGz6MO+G0OgAUX8ZjvLEGpGfsM84FABQfxmO8sQbkVLbhI01BnWoBABReeTnW52UOAKBoKahjfY7CR353AgAoenJy7M/JGABA8ZKfY3+Owkd6+dkZAKBwy88xPj9zAQCFW0Ed43MdPgAAAAAgLwgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTFMrwsXvvfvW9/3F16zPI++g3cJi+/WGBUlNTjcMBAFeAHbv26M77HtUjw1/Qmbiz3naPx6Np07/TbX0f1NwFf/jMAQAULoUyfKS5uk4N/d8Tg3V379sVGOCvL2bM0ffzFhqHXVZKSoo+mz5bw0e9qtjTZ7ztF+IT9O4Hn2rM6+8pMTHJZw4AoPCpXau62rRsqsNHo7Vi9QZv+8FDR/X7kpWqUa2KOrZr6TPn30SdAYCMCnX4CL+qjDq0baG7e9+ul0Y/qdCQktq4eZuSkpONQ7Pkcrm14++9ijt7zqc9OTlZ23buVgIFAQCKBD+bTd1u6qSgwAD9sXSVzp2/II/Ho3m/LlL8hQT16n6zQkqWME7711BnACAj25gxY8YYG9PzeDzeh8vlktPpVHBwsHFYgYo9HafFy1arYoVyatuqqXTpDMbvS1apXPhV6tiupaxWq95+7xNNnPSZGta/WmVKh0mSVq3dpIeHPy+r1SqHw64nnnlJx6JPKD4hUT/MW6jfl6xUQIC/Ro59R/EJiYo5dVrf/jBfm7fuVLtWzWSzWbV42WqNfXOipnzxjeb+skgej0d1alaT1Wr1bj/29Bl99+MCvf/xF6paJUKVIyoYngUAFD1JScmy2WyyWq2yWCzeR3Y8Ho/cbrcCAwKMXQUqNDRER6NPaNNf21W5YgW5XC59Nn22alSPVP/et8tu99OBQ0c07oNP9f7Hn+urWT/ot0XLJYtFNatHymq9+Jnb+fMXNOGjaRr3wRTNmjNP0cdPateeKD3/yrveY/rM2fP07Ji3fI7xu/fu1yPDX9CBg0fUtlVTxZ4+o8dGjNGSFWu1J2q/3hg/SYePHJPFYtFTo1/LtM7Y7X6GZwUAhV98fLzsdrtsNluu6oNRoT7zkSYxKVlzf1mk8xcu6JYuHeVnsxmHZKpCubIaMrC/KlUop5IlS+jhgf310AP9dO01dfXwwP4qWbKEKlUop2EPP6C7e98mP7ufvv1hgSZO+lwN61+t/3tisJo1qq+vZn2f4XKvRctW69YbO+mnmZ+oTcsmPn0AgH9G+rMfP87/XZ9Nn60Up1N9enRVYGCAdu2O0six72hv1AH1uv1mPfrgPSpRIliffvGNZv/0qzwej1JSUvTO+1O0Ys0GXV27pgbff5diTp3WT/N/N+4ux/ZEHZAz1aVZ0ybq6ScGq3bNapnWGbvDbpwKAFeUQh0+lq5cp259Bqn3fY9q7oJFuveuHmrU4GrjsCyVLFlCrZo1UmhoiAID/NWqeSO1bNZIFcqHq1XzRgoM8FdoaIjatmqqRg3q6VTsGS1YuFQ3dGqrYY88oA5tW2jwA/1UtUplrVi9QecvxHu33aRhfbVr0zxPiQ8AkHfVq1VRy+aNtf/gYW3ZvktNGzVQowZXK9Xl0vfzfpPT6dTIpx7WPX3v0C1drtNLo55UpQrl9NsfyxV7Ok5/79mvbTt3q2O7lnpp9LCLY0YPU9NGDYy7yrHSYaHq16ubHA6HJOmqMmGZ1pmcfngGAMVVoQ4faQvO772rh6pXraxPv/hGz70yXvHxCcahBeLQ4aOKPX1Gv/y+TLf1ffDiXbYGPKGo/Qd1/sIFpaSkeMfWqlGVIgIA/4K0sx/BQYEKCPDX7V07y8/PT+fOXVDU/kOqWiVCNatHeseXDgtVjWp
VdCE+XqfPxOng4aNKTk5Ry2YN5ed38RIoPz8/nzm5Vb5cuEqFhhibAQAGhTp8pC0479vzVr398rPq2K6ltu34W5u2bDcOLRAul1sej0e9ut+szye97fMY9+oohZUKNU4BAPwLKleqoMgqEQq5dFlTbly49AGWzer7AZLb7fb5GQBQ8Ap1+EjP5XbL6XRKkpKTL56BqFSxvOITEnXoyDHvuPT/nVuVKpZTaEhJ7di1RwH+/ipTOsz7CCsV6l2oCAAofEoEB6pC+XAdPnJMBw8d9bafPXdeh45EK6xUqK4qU1qVIyrIYrFo01/b5PF4JEnx8Qn60/DBVqWKF0PNrt1R3rYTMae8NQgAkHuF+m5XfnY/Bfj7669tO/Xpl99q85YdiqhYXnf3vl1BQYFKTErSyjUbtWtPlCSLFi1bpQULlyo1NVXX1q+r+vVqy2K1aN3Gv7Qn6oDizp7TkWPRqlC+nBwOu1au2aj9Bw8pOSVF+w4cUoN6dRRz6rTWbtisDZu3ymKRDh4+qpmz5ykgwF+VKpTT4aPRWr56vXf7AFCcFPa7XaVJTU3VkhVrlZCYqBuvb6egwEDZbDaFhpTUspXrtWL1BiUlJeto9HFNmjZDhw4f0929b1fD+nVVskSwNvy5VZv+2q6Dh4/q7Nnz+uSzGTp3/oLi4xPUvk1zVY6oILfbrRVrNmpv1AElJSVry7Zd+vrbn5SckqLIypXUtlVTJSYmaeHiFQoKClSn9q187mTlcrkz1JlqkZW52xWAIumKuNvVzr+j9NZ7n+iTz2bqwMEjur5Da7383HBdVaa0JKlJw2vUv+8dSkpK0bTp3+lM3Dn9p19Pn2342Wzqd+dtqlShnBYvX6O1G/6Sn59NJUsE6+7etyk4KEg/zFuoPVEH5O+wa8iAfurT41bFno7Th5O/0oeTv9Tp03EKL1vGZ7sAgMKnaaP6enHkUFUoH65vvp+vj6ZMlzPFqaeHPqiuN14nSQorFaqRTz2sWjWqatXaTfpixvdq36aFOrVv5bOtqlUi9ND9d8lut+ub7+frzy07NPC+PgoKzFnAyqzO2GyFuuwCwD/O4kk755wFt9vtfaSkpCghIUHh4eHGYQCAYuBM3Nkcf7KV2Znxorw27t0Pp2rt+j/18nPDVbtmNWM3AFzRTp48qaCgIDkcDlmtVu8jt3I/AwCAYuZo9Alt3rLDuy4EAPDP4MwHAMDrSjjzkZiYpIkff66SJYJ1dZ2aitp3UH8sXaUL8Qm6r19P9b7jFuMUALjiceYDAIA8sNmsKh0WqqUr12nc+1P004I/FFYqVE89Pki9br/JOBwAUIA48wEA8LoSznwAAHKPMx8AAAAAihTCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwRa7Dh8ViMTYBAIqJ/Bzj8zMXAFC4FdQxPk/hw+12G5sBAEWc2+3OV3GhPgBA8ZTf+pBejsKHcWcJiYkUGAAoRtxutxISE33ajMf+zBjHUB8AoHjJa33IisXj8XiMjem53W55PB653W45nU4lJSXJYrXJ6XTK5XIpNTVVLpfLO06S938BAIVLWsGwWCyyWq2y2Wzy8/OTzWaT3W73/rfVavUZm5n0x3y32+2tCdQHACh6sqsPHrdLAQEBstvt3hphteboPIaPbMNHWlFJCx/Jycmy2vzkcrkuPVLldl8ckzYeAFB4pRUYq9Uqq9Uim+1icUl7XGy/WFiyCh5pPB6PT534X22gPgBAUXO5+uB2pcrf398bPtJ/SJUbuQofqampSklJkdXm5y0y6c+MpJuUfhMAgMIiXaFI/8lVWuiw2Wze0JGb8JH2SKsL1AcAKGKyqQ9uV6ocDof8/PzMCR8ej8cbPmx+dm+RSSs4aWMBAIVf+tPrFovFGzqMZzyyKyzpj//p6wX1AQCKpqzqgyvV6Q0f6etFbuUofKQVFJfLpZSUFPnZHd52igsAFD3G4mJ8pB+THWMAMT7SjwEAFG5Z1YdUZ4ocDofPusCc1on0ch0+nE6nT/hIGwMAKHoyKzLGvuykrwGEDgAoHoz1IdWZIrvdbl748Fy67MrpdMru8M9QbAAARU9WYSO3BSWrmkB9AICiyVgTnCnJ3rsiZvaBVU7lKnykP/ORmWw2BQAoJLIqGFm151RWdSCrdgBA4ZJVHUh/5sP08GF3+Hv7AABFX14KyOVQHwCgeEirD2lnPv7V8AEAAACg+Cuo8JH7ryUEAAAAgDwgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYIp83e0qm6kAgCIiL3csuRzqAwAUD4XiVrv+/gGy+dmU+90BAAojjyRXqkvuy5eEbFktFuoDABQjafUhOTmpQMJHri67SgshNhuFBQCKE4skm83mPc7nFvUBAIqn/NYHoxyHj7SdeTwe5SHkAAAKOYvF91ifU9QHACje8lofMpPj8JEmvzsEABRe+TnG52cuAKBwK6hjfI7CR9rO3G633G63sRsAUEykP87npNBQHwDgypDb+pCVHIUPpbueNz87AwAUbnk51udlDgCgaCmoY32Ow4fS7RQAUDzl9Tif13kAgKKhoI7z2YYP4044rQ4AxZfxGG+sAekZ+4xzAQDFh/EYb6wBOZVt+EhTUKdaAACFV16O9XmZAwAoWgrqWJ+j8JHfnQAAip6cHPtzMgYAULzk59ifo/CRXn52BgAo3PJzjM/PXABA4VZQx/hchw8AAAAAyAvCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAuCKlpKTo0y+/0Z33Pqq7Hhiq7Tv3aNTYdzTw0Wd14uQp43AAQAEo1OEjPiFRs+b8rIGPPqvb+j6obn0Gqe8DQzVz9lzjUADAFWD33v3qe//jevu9T7xtHo9HM76bq259BmnCR9OUmprqMycrc+b+pu/n/qar69bU3X26y+Gwy2azKiDAXzZboS6PAFBkFdqj6959B/XYiBf15czvlZCUpNYtGuuG69qqYvlwnYw5bRwOALhCrVi9Qd98/7Pq16ujB//TV35+fsYhGXg8Hh08dFSlw0L1yMD+uv2WzqpVo6pefm64Phw3VleVKW2cAgAoAIUyfJyJO6t3P/hUcWfP6dEH79H0ye9q1FOPaNgjD2j8689p6JD/GKcAAK5Au3ZH6aNPp6vsVWX05CMPKDg4yDgkU0lJyYo9Eyc/Pz85HHZjNwDgH2LxeDweY2N6brfb+0hJSVFCQoLCw8ONwwrUvF8WadLUr9W/T3fd1a
ubLBaLcYgkadXaTXpt3Ee6qXN7HTpyTLt279PI4Q+rTcsmOnDoiD6c/KX+3rNfNptVzZtcqyED7lbpsFKSpA1/btXX3/6kvfsOyuPxqHKlCnrogX5q2OBqJSYm6cXX39Op2NN68P679Pn02Tpy7LiqVK6oZ4Y9pLi4c/rvp9N15NhxRVQsrxFDH1TN6pHGXw8AipwzcWdlt9tls9lksVi8j8x4PB7vw+Vyyel0KqxUqHFYgdq9d7+ef+VdNWvcQPf166nnXx2v8xfi9eIzj6tu7RrecW63W0uWr9EXM7/XqdgzKlmyhHredpN6dOuifQcO6/lX3lV8QqJ3fL26tfTssIf0xoSPdSr2tN5++VmVKR2mmbPn6atZP6j3Hbdo9bo/dSburF5+brhq16zmnQsAV4KTJ08qKChIDodDVqvV+8it3M/4h3k8Hm3fuUehISXVpmWTLIteeouWrdatN3bSTzM/UZuWTbRrd5RGjn1HTmeqhg75j+7u3V1/bd2pdyZOUWJiknQpuFSJqKgnHx2g+/r1VOzpMxr/0TRFn4jxbvdM3Fl9OfN7db3xOjVt3ECHj0TrpTfe1weTv9QNndqqU/tWOhp9Qh9O/lLx8QnpfiMAwD/p/IV4jf9omk6fjtMjg/r7BA+Px6Nvf1igiZM+V8P6V+v/nhisZo3q66tZ3+v7eQtVoVxZDRnYX5UqlFPJkiX08MD+urv3bbJf5gzIL38s1+MP/UezPnuf4AEA+VDowkfaqXB/f4dK5PD0eZOG9dWuTXNZLBalulz6ft5vCi1ZQi88M1Q3XNdWve+4RTd17qC/9+7T3n0HJUmPPHiPhj3ygDq1b+Xtj4s7q2PRJ7zbtVqtevA/d+n2rjdoxOODVL1qFZ08Fat+d96mO7vfokcG3aOr69TUyZhYnTp9Jt1vBAD4J+2JOqA9UftVrWqEmlx7jU/f8ZOntGDhUt3Qqa2GPfKAOrRtocEP9FPVKpW1YvUGyWJRq2aNFBoaosAAf7Vq3kiNGtSTn83ms530ulzXVtdcXcvYDADIpUIXPmw2q/z9HUpNTVVKitPYnalaNap6i8a5cxcUtf+Qjhw7rvseekrd+gxStz6DNGfur0pOTtH5C/GSpGPRJzRp6td6bMSLunvgMM2Z+6tSXS4lJ6d4txsaUlJVIipIkvxsNvn7OxRWKkRXX/qELTAwQGXLhCk1NdVnHgDgn3Vt/brq2Laldv4dpcmfz/K5w9Whw0cVe/qMfvl9mfdOif0GPKGo/Qd1/sIFpaTk/nid/swKACDvCl34cDgcqlYlQmfizmnDn1uN3dnyeC6uT6lds5o+fu9VfT7pbZ9Hs8b1tSfqgJ558S1t2bZLt97USS8887h6db/ZuKlMsTgRAP59fjabBt7bW/Xr1dEfS1dp9k+/Km0Jo8vllsfjUa/uN2eoAeNeHfWPr0sBAGSt0IUPSbq+Q2uFhJTQ19/N1fpNW7wFJSdKlghWpYrldfTYcZ07d15lSof5PBwOhzZu3qbz5y/onr536JYu16lOreqKPR1n3BQAoBALDg7Sk488oArlwzVz9tyLl1RJqlSxnEJDSmrHrj0K8Pf3qQFhpULztEASAFAwCuURuGpkhIY8cLeSkpI09o2JeuCR/9Pb732idz/4VI+NeFETJ31unOLlcDh08w0dlOJ06qU339dXs37QspXrNOXzWfrks5mS5F1L8sO837R4+Rq9N+kzbd6yw7AlAEBhVy78Kg1/dIACAgL00afTtWt3lCpVLK+WzRpp599RembMW1qwcIl+X7JSr437SOs3bTFuAgBgokIZPiSpfZvmGv/6c2rS8BpduJCgpSvXacmKtUpOTlGdWtWNw320adFEzz45RCElS2jWnJ/19sTJWr3uT0VWriRJ6nxdW3Xu2EZ79h3QxEmfqWSJYN18QwfjZgAARUCdWtV1x61ddOFCvN79cKpiY89oyIB+6tPjVsWejtOHk7/Sh5O/1OnTcQovW8Y4HQBgokL5PR8AgH9HYf+eDwDAv6PYfs8HAAAAgOKJ8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFLkOHxaLxdgEACgm8nOMz89cAEDhVlDH+DyFD7fbbWwGABRxbrc7X8WF+gAAxVN+60N6OQofxp0lJCZSYACgGHG73UpITPRpMx77M2McQ30AgOIlr/UhKxaPx+MxNqbndrvl8XjkdrvldDqVlJQki9Ump9Mpl8ul1NRUuVwu7zhJ3v8FABQuaQXDYrHIarXKZrPJz89PNptNdrvd+99Wq9VnbGbSH/Pdbre3JlAfAKDoya4+eNwuBQQEyG63e2uE1Zqj8xg+sg0faUUlLXwkJyfLavOTy+W69EiV231xTNp4AEDhlVZgrFarrFaLbLaLxSXtcbH9YmHJKnik8Xg8PnXif7WB+gAARc3l6oPblSp/f39v+Ej/IVVu5Cp8pKamKiUlRVabn7fIpD8zkm5S+k0AAAqLdIUi/SdXaaHDZrN5Q0duwkfaI60uUB8AoIjJpj64XalyOBzy8/MzJ3x4PB5v+LD52b1FJq3gpI0FABR+6U+vWywWb+gwnvHIrrCkP/6nrxfUBwAomrKqD65Upzd8pK8XuZWj8JFWUFwul1JSUuRnd3jbKS4AUPQYi4vxkX5MdowBxPhIPwYAULhlVR9SnSlyOBw+6wJzWifSy3X4cDqdPuEjbQwAoOjJrMgY+7KTvgYQOgCgeDDWh1Rniux2u3nhw3Ppsiun0ym7wz9DsQEAFD1ZhY3cFpSsagL1AQCKJmNNcKYke++KmNkHVjmVq/CR/sxHZrLZFACgkMiqYGTVnlNZ1YGs2gEAhUtWdSD9mQ/Tw4fd4e/tAwAUfXkpIJdDfQCA4iGtPqSd+fhXwwcAAACA4q+gwkfuv5YQAAAAAPKA8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYArCBwAAAABTED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCn+8fBxPDpaNWvUkL/Drp9/nmfsLvSOHj2il14aq1WrVhm7AAAAAORCnsOHx+PRunXrdMcd3VU6rJT8HXb5O+wqHVZKTwx9XBcuXDBOKZJeeP4FvfrKKxrwwP06eeKEsftf4XQ69fPP83R9p+tUIjjI+9pXjYzU00+PK
DavPQAAAIqXPIUPj8ejd8eNU/t2bbVg/nzFx8d7++Lj4/Xzz/N14fx5nzlFVaNGjWSxWNSlSxeFlS5t7DbdwYMH1bFDe/Xs0UMrV66U0+n09kVHH9P3c77/R177LVu26J7+/fX4Y48auwAAAIAcyVP4WLt2rcaMeVGS9NBDD+lY9HElpziVlJyiHTt3qfsd3WW15mnThc7jQ4cqKTlF73/woex2u7HbVIcOHVTXW27Wxo0bZbfbNfq557Rz199KSEzS2XPntWTpMt3a7dZ/5LX/7ddf9e233yghIdHYBQAAAORInt6lbt26VSkpKapYsZKeefZZlSlTRpJksVhUo0YNjRv3rsLLlTNOQz44nU49+8yz2rt3r0qVKqWFv/+hF154UdWrV5fNZlNAQIBat26t996byGsPAACAQilP4aNChfKSpFOnYrRn9x5jd5Y8Ho++/fYbVY2MlL/DrnLhZfXfjz6Sy+XyGZeYmKiPPvxQtWrW9K5niIiopFGjRurcuXM+Y995+235O+zqckNn7d27Vw8NflCBAf7yd9jVuFFDLVm8WB6Pp0DGp62l2LRpk8JKhSqsVKhWrVqld8eNU7nwsvJ32HV13Tr6feFC7/w0R48e0cNDHvKu0WjcqKGWLVumgQMGyN9h1ztvv22c4mPr1q1asGC+JOnlV15V69atjUMySExM1FdffammTRp7n2PpsFIaMeIpn0vl0t8U4Pvv5+iN119XYIC/br75Jq1cuVJhpUI1evQoSdJXX30pf4ddYaVCtWnTpnR7AwAAAC4vT+GjZYuWqlevnlJSUtSt26169ZVXMoSCzLw3YYLuveceeeSR3W5XXFychg17QrNnf+cdc/bsWfXocYeefHKYDh06qLJly6ps2bKKOXlS4955R11vuVnHo6N9titJp06d0r339Nf06dNVvnwFWa1W7dixQ7fccnOmd9nK7fjMpKSk6OEhD2n06FEKDAyS1WrVvn371L//3dq8ebN33KFDB3Vjly6aOnWqnE6nyleooIMHD6r3nb20bNkyn21mZePGjUpISFDFipV0661djd2ZWrToDw0cMED79+9Xo0aNVK9ePcXHx+v9iRP11PAnlZqaapyiL774Qi+++ILcbrdcqamy2+2qXr26goODJUl2u10REZVVvXp1ORwO43QAAAAgS3kKH2XDw/XDjz+padOmcjqdeumlsSoXXlZPPjnssiHk8OHD2vzXFh08eEj7DxxU06ZNJUlzZs9RamqqPB6PXnpprBYvWqRq1appw8ZNOnL0mI4cPaYNGzepWrVqWr9+vSZPmWzctHbs2KF27dsr5lSs9h84oKPHotXxuuvkdrs1adIkJSb6rlXI7fjMpKamqlz58jp6LFoHDh7U1m3bVa1aNcXFxWnOnNnSpbM948aN0969e1WzZk3t2LlLBw8eUsypWL3x5ls6cuSwcbOZ2rFjuySpZs0aCg0tZezOVMkSJfX5F18o5lSsVq9Zqz83/6Vpn30mSZq/YIEOH864718WLNAXX36phMQkLfz9D7Vo0UIbN/2pUaNGS5L69r1LUfv2aeOmP1W/fn3jdAAAACBLeQofkhQZGak/Fi3Wx59MVtmyZeV2u/XRhx+qcaNG2rBhg3G4JOmJJ4apbt26kqSyZcuqe/c7JEmxsaeUlJSk6Ohj+unHnyRJL738sho0aOCd26BBAz3xxDBJ0oL583XmzBlvnyTVqVNHT494WoGBgZKk0qVL68UXx8jPz08bN2zQwYMH8zU+M35+fnrxxTEqfekuWDVr1tR1110nSTp65KgkKebkSf36yy+SpBfHjFGNGjUkSTabTXfddZc633CDd3uXc+5s1qEuKx06dtRdd/WTzWbztrVu3UblypfX+XPnMryGkvTk8OHq06evzxwAAACgIOQ5fEhSYGCg7r//fu0/cFCfff65QkJCdOTIYfXq1VNRUVHG4aoSWcXn5/oNfD85P378hE6dilHp0qV17bUNffokqXmLFgoMDNTJkzFKTkry6WvcuInKhof7tFWtGqnw8HJKSkpSQkKCT19ux2fG4XB4w0uamjVr+fx85OhRnThxItPnFBgYqHLhOVscXrt2bUlSTEyMkgzPPSsej0d79uzRSy+N1Z29eqla1aqqd3VdnTh+3DjUq23btrJYLMZmAAAAIN/yFT7S2O129et3t37/Y5FKlSql49HRmjd3rnFYjtkdDvn7+xubL8vPzy/Dm+bY2NM6fz7zMwa5HZ9feXlO6V1T/xpJ0q5du7Rp40ZjdwapqakaPXqU6l9TT6++8oo2b96sNm3aqE+fvqzVAAAAwL+iQMJHmipVqngvK0r/5Xc5Va5cuEqVCtPJEye0Z/duY7d27NihxMREhYeXlX9AgE/f/v37dN7w5Xq7du3U+fPnVa5cOUVUquTTl9vxeRUUFKSAgIBMn9PZs2e1e/ffPm1ZSVvk7/F4vIvxL2fv3r2aNnWqgoKCtHzFSu2NitL0r7/WmLFjC8WXJQIAAODKk6fwMWXKFI175x2dPHHC2+Z0OvXlF1/ozz//lDK5xConypUrr5tuutH7Bjv9pVtbt27Va6++Iknq2bOXwsLC0s2UVq1apVmzZnlv2xsVFaWxY8ZIkm66+eYMl1jldnxeRUZGqmmzZvJ4PHrzzTd09OgR6dLrNXHie1q3bp1xSqbKhofrhRdflNVq1d69e9W0SRNNmzZNsbGxkqSkpCStWLFCQx4arJMnTighIUFJSUkKDg72rklxuVyaOWPGZS+7ys6uXTu9+wQAAAByI0/hI+7MGY0aNVKVK0d4v4ejRHCQnn56hNxut3r1ulNdu95qnJYtPz8/jRo9WvXq1dPevXtV7+q6iqhUURGVKqpZ0ybav3+/evbspQcHDzZOlcPh0BNDH1elihVUrWpV1b/m4jZq1qypp556KsMlVrkdn1eBgYEa/uRw2e12rVixQtWrVVNkZBWFlQrVzBkz1KJFC+OULN1xRw9NnjJFdrtd586d05CHBqtihfLyd9gVGlJSna/vpN9//0Nut1vh4WVVoUIFxcTEqFHDa1WjenVVqlhBK1etVNmyZY2bzlbdqy/eKGDDhg2qVbOGrm1Q3+d2wgAAAEB28hQ+evbqpQcffFCVK//v7IbdbleLFi00c9YsffnVVypRooTPnJyqUiVSS5ct11MjRqhseLhiYmIUExOj2rVr69OpUzV12jSFhoYap6lly5Za8MuvqlGjho4du3inqTvv7K3fFi5UlSqRxuG5Hp8f13furN8W/q5mzZpJkk6eOKHu3e/QbwsXqnbtOsbhWbJYLLrnnnv19+7dGjJkiM/ZmbTX/+133lbZ8HBFRFTW1zNmqlmzZnI6nTp27Kj633OPJkx4TwEBvovkc+Kmm27WM88+K7vdrvj4eO9ZFQAAACCnLJ70X+edCY/H4324XC45nU7ZHXlfOF3Q3nn7bY0ePUodOnTQ9z/8mG3oye34f1J8fLx69eqpxYsW6dVXX9OIp582DgEAAAD+dc6UZNntdtlsNlksFu8jt/J05gMF48CBA/pz0ybp0veYAAAA
AMUZ4cMEy5Yu1ddfT/e5A9jRo0c09PHHFBcXp3r16qlJkyY+cwAAAIDihvBhgvMXzuuB++9XSMkSqla1qiIqVVT1atW0YsUKBQUF6bXXXi+wu2sBAAAAhRXhwwTXXnut7rvvPoWGhurYsaOKiYlRWFiYBgwYoG3bt+uWrl2NUwAAAIBip8gvOAcAAADwz2LBOQAAAIAihfABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFIQPAAAAAKYgfAAAAAAwBeEDAAAAgCkIHwAAAABMQfgAAAAAYAqLx+PxGBvT83g83ofL5ZLT6ZTd4e/tAwAAAFA8WSwWSZIzJVl2u102m00Wi8X7yK08hQ9//wDZ/GzK/e4AAAAAFBUeSa5Ul5KTkwokfOTqsqu0EGKzETwAAACA4s4iyWazeXNAfuU4fKTtzOPxKA8hBwAAAEARZLH4ZoH8yHH4SJPfHQIAAAAoWgoqA+QofKTtzO12y+12G7sBAAAAFGPpc0B+gkiOwocMC88BAAAAXDkKKgvkOHwo3U4BAAAAXDkKKgdkGz6MO+GyKwAAAODKYswAxoyQU9mGjzQFdaoFAAAAQNFSUFkgR+EjvzsBAAAAUDzkJxvkKHykl5+dAQAAACh6CioD5Dp8AAAAAEBeED4AAAAAmILwAQAAAMAUhA8AAAAApiB8AAAAADAF4QMAAACAKQgfAAAAAExB+AAAAABgCsIHAAAAAFMQPgAAAACYgvABAAAAwBSEDwAAAACmIHwAAAAAMAXhAwAAAIApCB8AAAAATEH4AAAAAGAKwgcAAAAAUxA+AAAAAJiC8AEAAADAFISPIiIxMUmjxr6jgY8+qxMnTxm7AQAAgEKv0IWPHbv26M77HtUjw1/Qmbiz3naPx6Np07/TbX0f1NwFf/jMuVLYbFYFBPjLZit0/2wAAABAtgrdu9jataqrTcumOnw0WitWb/C2Hzx0VL8vWaka1aqoY7uWPnOKutXrNumR4S9ozfrNxi6vwMAAvfzccH04bqyuKlPa2F0k5OR5AgAAoPgqdOHDz2ZTt5s6KSgwQH8sXaVz5y/I4/Fo3q+LFH8hQb2636yQkiWM04q0g4eP6dCRY3K73cauYuVKeZ4AAADI3P8DeuKS67vt0goAAAAASUVORK5CYII="> |
| Kiwi-Nano | <img style="height: 250px;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAycAAAItCAYAAAAjRQV7AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAIhMSURBVHhe7d13eBTlwsbhZwMhhSSUUBJCi/QiRUCUDiJYAEURVBQUURQBFQuKoiAWELGi4tEPxYYgigIeDii9Q5DeS2ghoYSWkASS7H5/JBt3J7vJbhoD/O7ryqW8M9tnZ99n3mZ5+o0vbQIAL00a/YSxCAAAIF8shBMAefHwvZ2MRQAAAPlisdlshBMAAAAAl52PsQAAAAAALgfCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMAXCCQAAAABTIJwAAAAAMIUrOpzExsaqWrVqslgsbv/69+9vvJnXxo8fn3V/c+bMMW4udHPmzMn2ukqWLKkNGzYYd3V6Tzp06KDExETjLgAAAIApXdHhxBPfffedqlWrptjYWOOmK0L//v3Vo0cPY7GSkpLUvHlzjR8/3rgJAADNmTPH7YWsxMREdejQoUAu4F0J+vfvn+0in/2vMH9H+/fvX+AXCjds2KChQ4cai03B+D4X5GuPjY1V//79C+z+vFGYr8uoMD7fy/ne5cVVH04k6fDhw3rggQfy/KGMGDFCNptNNptN3bt3N24uNHPmzNF3331nLHby5ptvuvzhAQAA/2rfvr0SEhKyfs9tNpuioqL05ptvFmpAKUiJiYl6/vnndf78eeOmy2rDhg0qWbKkJDm9v9WqVVPFihULpJ7y8ssv69ChQ8biQlUUr8tRYX2+l+O9y4+rJpy4OunMnj07a/vSpUu1ePFip9uY3cyZM7P+f9y4cU6vrV+/flJmC8onn3zicCsAAOCJZs2a6fXXX9e8efPyfAHzWmevUPfq1UtTp0512vbZZ5+pRYsWev7556+49/dqfV1XgqsmnLjSvXt3p4DiWNmXQyLObSyHqzEnjrft37+/U5PfHXfckW1/R/Z9XT2WK1WrVs0KI3bDhg1TYGCgJOnQoUNuvxxRUVFOrzGnpkhXY1uMV5PsXQEsFouqVaumf/75x2ncz5XchQ4ArnXG7iuOv2EbNmxQ1apVNXnyZKfflfHjxzv9Jrr6bXP8HTXerxlER0crISFBsbGxatKkiV577bVsr8X4Goy/jzK8fx06dNC5c+eyttnHhDq+dvtvqrF7navPITY2Vg0aNNDSpUuduqw7/i5fjvd38eLFWr9+vYYNG2bcpKCgII0ePVpz585VUFCQZKhH2P8cX/+cOXPUoUMH3X333bJYLGrbtq3atm2r7777TkuXLi2UFgtXvH1dyuUYsX/+kydPdqo32V+7u89XLupnnnzHxo8fn/VeF/V7l2+2K9ixY8dsVatWtUmytW/f3paQkGDcxe0+s2fPtkly+zd79uys+xg3bly28qioKFtgYGC22wUGBtr+97//ZT1mv379su7HlsPzccXxcatWrWo7duyYcZdsHO/f3Z+rx+3Xr1+2/Vztn5CQYGvfvn22fRz/AgMDbVFRUU73DwAoWrNnz3Z7Prafy+2/UfZ/O/7W2H8njb97jvvYfzuMZY7/HjdunNPzsN/PuHHjbEWlX79+Ln/7jNvsv6HG39x+/frl+hpcvW7H31D7fTvWL9x9Do7P1f5Ys2fPdru/Y10jp8+9MIwbN87te2vk6vka30v7cWesP+X0GRYGb16XzYNjxP75O+5j/I65en9cfZ7GY834b/tj2R+7qN+7/LqqW04kKTg4WJGRkZLDlZHExERNnDhRMnQHy6mVJTezZ8+WzWbThQsX1LVrV3Xo0EGStGTJEqeWhKioKB0+fFiS9PzzzzslbqN+/fqpatWqUua4mUqVKrlM4zmxdwdLSEhQ+/btJUnr16/X7t27s/ZxHNvSr1+/rK5j48aNkzK7xH322WdZ+zuqWrWqjh075rR/UlISTZ0AYAL2yVMcr6haLBYFBwdr6dKlWfvZrxL/9ttvCg8PlzJ7H/Tr108TJ050Op8PHjw4a59evXpJkiZNmuRUdurUKR07dkyxsbH6/PPP9frrr6tZs2aSQ1eqzz///LK3tNt//26//Xan32PH17hhwwbNnDlTP//8s9vXYN/H8X14+umns353PbV7924dOHBAEydOzHo+zZo104ULF1yOeU1ISFB0dLTq16+fVda9e3dduHAh67kWth07dhiL3LK/PsfWiDp16qhFixZO9xMYGOiyxaIoefO6PDlG7By/Cx07dlT79u1zrHPOnDlTvXr1cvo8jd+xJUuWOB174eHhOnTokEaMGOFwT1eOqz6cuBIUFKQlS5bIZrNpyZIlWSeASpUqZXWV8kb79u3VsWNHpzL7Cfvw4cOKiorKKrcfgFWrVlXz5s2zyl0JDw/Xb7/95vI5vfzyy7Lk0k3LsTtYUFCQbr/9dinzx+rYsWNSZvOqPahVrVo1K2DIcGI1frmUefJw/CFz3N8YgAAARS8wMFBRUVFOYxaNF6yUWRErV66cKlWq5HT7Xr16ZTufO1aEK1WqpCpVqmS7nV1UVJROnTqlzp07O5V37txZNpst67eoKCxdulTBwcFOIa1Hjx6aPXt2tkqc42u0P0fja+zcubNOnTqlqKgoHTt2TOXKlXP6XQ8KClK1atWcbpObv//+WxaLJdtjuRMeHq4OHTro5ZdfzrE+YBbNmjXT4cOH1axZs6yugMagLMnlsWhmnhwjdo7HliemTp2aNebF3t3PcRZX+3fM+NhXsmsynBjZ++Q1b95cSUlJxs25qlatWrYWkObNm2e1etgDiT3dSlKHDh2yKvU5sV8xMf6Q2C1dulTdunVzeUKKjIxUcHCwsdjJ7t27tX79esnFc3IMNMaQJRcnD3cBCABgfp78ZuSFq9ab5s2bKz4+3rhroXI1cY7Nw1k4jb93Ru6usntbEVUePoepU6dq3LhxTuHLOIalMHnzGh3HmzRv3ly9evVyW7+53Lx5XfLgGMkrx/EmM2fOVFRUlFNPHxXiY18uV304sTd5GjkO3nOV2vPLfjVDDl27HLt02VtWPOXY2mPLnP7Q3qJivKoFAIC37F2fjfJb8XHXelOUXY/yy96Fxig0NFSVKlVyW5F1F1py4u5zyInjkgfjxo3Td99953H37/yqX79+jvUQx/V27N0H7ceDcRYsM/HmdcmDYyQv7L1b7F3u3X1n3D32leqqDyeOgcDeMhAbG6t77rknq5XEfjXFscJfEOwzatlbHewtKK66gRkZZwMzsvdlFK0UAIB8ql+/vssKzsyZM72+ku/IXikz3u+cOXOumNkd3b0Gxy5YlSpVytZ9JzEx0eXaEo6BxXgB1d
3n4I0RI0aoX79+eQpGedGxY0e1aNHC5bIG9sp1ixYtVKdOHe3YsSPr/+0ce3CYiTevy5NjJC/sx4fxgrbjGBV3j30lu6rDyYYNG3T//fdn/dv+4ToGlnHjxjmNOylI9kFekjRx4sSsLl3GgXeuVKpUSeXKlZMyV7k3XgHZsGGD3nzzTaeyvHB8jsbB+4mJiZo3b57kZoyMsauX4/6BgYF5/jICAIqWvSJ2zz33OE1f+t133+U6eUtO7BfS7r///qwrzLGxsRoyZIjToHMza9asmXr16uX0Guy/wfbXYN9nyJAhWe/fZ5995tQrwz5Bj+OaKi+//HJWfUQOn4PjpDKxmVPQGusB7rbZB2cbK7SFJSgoSBMnTtTMmTOzXUx9+umntX79+qwB/vXr13dady4xcy2RpKSkHJdFuBy8eV2eHCN5YT9mHCelsH8v7RemXR179u5zxud9pbhqwomrgW6OY0jctVbYryw4fkEKiuMYjKVLl+rw4cMKDAzMNjDQlfDwcA0ePDjr3/YB8K5em6vg4CnjuJKXX345a5vjidXdl8vxy7B48eKs/Y0zSwAAzMvedbhDhw6qlDkz5P3336+oqCiPxmTkZMSIEXr99dezxp1UqlRJgwcPzjYI3cymTp3q9BqaN2+un3/+2ek1TJ061en9mzdvnu66666s7UFBQZo2bZqio6Oz6iv169d3WsfM/jlUq1Ytax/H9ysoKEjPP/+8vvvuO5UsWVLHjh3TmjVr9PnnnzvVD15//fV8f27eaNasmY4fP65Dhw451VUOHTqk48ePZ9UHunfvrnHjxqlHjx6yZHarv/3229WvX79cu7MNGzZM69evV3BwcJGt4+Lp65KHx0hujJ/v7t27sx0zQ4YM0ezZsxUYGJhVh506daoGDx6cdewFBwerWrVqWd3mLsd7ly/GuYWvJJ6s6WGcf93T2znOB53bOifGubgdGddD8Waeaft818bn5vhnnPva8bUZH8vV67DrxzonAAAAuMyumpYTd8aNG6dDhw45XfUPDw/XmjVrsmbTksOAPftUujkNgvKGvbnNzpMuXXb2KyjGWRns+vXr53ZwlLemTp3qcszN7Nmz3XZ7q1q1qvbs2eM0y0bVqlW1b9++AnlOAAAAuLZYbDabzViIgtW/f3999913CgwM1LJly67ointiYqK6deumpUuXqmrVqlqzZo3L7l4AAACAt676lpPLzXH1dcZhAAAAAO4RTgrJ+PHjZXFYxTMwMFDDhg0z7gYAAAAgE+GkCFwN3bkAAACAwsaYEwAAAACmQMsJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBcIJAAAAAFMgnAAAAAAwBYvNZrOdOHFKqWlpSktLU3p6utLT02W1WmWz2SQp678AAAAAUJAsFkvWfy02m8128tRpWa3pSk+3Ki0tTVarVVarVSKYAAAAAChkWQHFZrPZ4k+fzQwk9hYTZYUTEVAAAAAAFBJ7MJE9nJw5e142m03p6ekZQcTenUuEEgAAAACFz6LMbl3nzifKZrM5/YkWEwAAAABFJGvMyfmEC4QSAAAAAJeVxWaz2RISk5wCCeEEAAAAQFHLCieOCCcAAAAAiprFZrPZEi8kS4QSAAAAAJeRUzgBAAAAgMvFx1gAAAAAAJcD4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJgC4QQAAACAKRBOAAAAAJiCxWaz2RIvJBvLAQC46qSmpmn/gYNasWaDNm3eppPxZ3Tm7DlZrdasfXx8fFSmdCmFBAepccN6atmiqRo1rCs/vxJO93U1stlsOhV/Rv9s2qot23drz94DOp+QmO09kqRSIcEKDAxQjchqatSwrpo0aqAqEeHy9S3utB8AeINw4oF3JnymRctWGYuLTMnAAL039hXVqV3DuAkmtWL1eo1+5yNjsVuPPtRLffv0NBZfMZKTk/XK6AnatmO3cZNXmjdtpDGvPudVJfBYbJyee3ms4k+fNW7ySm6fwY/TZ+mbH2Yai90qyvvLy/s/euSzanNzC2PxVetU/Bn9PHO2FixcoaTkJOPmXPn4+KhJo/p66rGHFFm9inHzFc1ms+ngoaP6dfb/tGpNlM4nJBp38Zivr69aNm+i++65Q/Xr1JLFYjHukitvj+eKFcrp4/dGq1xoGeMmAFcgunUBMI3oQ4cVf/q0sThHhw4f05kz54zFgCTJarVq7rxFeuTJ5/X73AV5CibKv
J9/Nm3ToGdG6uPPpyglJcW4yxUp+uARDXnhDQ0a9or+99eSfAUTSUpNTdWK1ev1zItj9PTwUYo+eMS4CwDkiHACwDTOnDmn/dGHjcU52rp9l6w2m7EYUGpqmr75YaY++WKKUlIuGjfnidVq1Zx5C/X62x8q8ULego4ZXEhK1kef/58GPTNSu/fsL5Tv0J590Rr0zEh9/tX3unjxknEzALhEOAFgGlabTTt37TMWu5WSkqJdew8YiwHZbDb98ttcTZ85u1Aq3v9s2qbPv/5eqalpxk2mF33oqIYMf11z5y3KNo6koFmtVv02+396+/1JV3SYA1B0CCcATGXH7n1KTvZsHNzpM2cVcyzWWAxoz94Dmv7b3EIJJnZ/L1yuRUtXGotNbdfufXrx1bd1JOaYcVOhWrVmg94c9zEBBUCuCCcATOVIzDHFnYg3Fru0P/ow402QTVpauqb/NlcXkjwLuXlltdk0579/XzEV7l279+m1sRN19tx546YicSW3NgEoOoQTAKaScD5RRz28qrtz175CvTKOK9ORmFht3rbTWFwoDh6O0f4DB43FpnP6zDlN+PiryxZM7BYvXa2Va6KMxQCQhXACwFSsNpv+2bTNWJxNcnKyduz2fHwKrh2HjhzVuXMJxmKXfCwWdencTt9/9aH+mv2D/pr9g77/6kN16dxOPh5Mg5ty8aKiDx81FpuKzWbTDz//pkNHLv/zTE1N1ZTvf9HxE6eMmwBAIpwAMKNde/YrIfGCsdjJmbPnFBt33FhcaPr26am/5/zo8Z+7NUlQ+GJiPBuH5GOx6JG+vfTisCcUHlZBFotFFotF4WEV9OKwJ/TwA/cYb+LShVyO1ctt87adWrBohbH4sjkWG6fZ//1bNlo9AbhAOAFgOidOxetU/BljsRPGmyC/ylcop1tvaedyoUCLxaI2rW5UqVLBxk1XlNTUNM2aMz/P67L4+vqqU7tW+nD8KP058xv9PedH/TX7B8347nMNHzJQ4WEVjTfxyOJlqxR3/ISxGAAIJ4WlU7tW2a6k5vXvj+lfszo8rho+FouKFStmLHZy7lyC9u7dbyx28s+mbbmONwkqGWgsArKEBJWUv7+fsdgrPhaLrqtezVhsGrv37teGjbl3k3SlVo3qmvL5exr54tO6vn5d+fmVkDKDW9kypXRH146a+uX7emLAg/L19TXePEcnTsZr5dp/jMUAIIvNZrMlXijcGU2udO9M+EyLlq0yFueoU7tWGvni08ZiUzh3PkG/zPqvFi9frZMnTslqsymoZKBq17xOPe7srJbNm8rXt7jxZrlKvJCkTVu2a/W6jdq+Y48SEhN17rxzv29fX1+VKR2ikoGBqle3ppo0qq/GDesrtGxpp/0KSnJysrbt2KuofzZr+849On32nM4nJLpckM3+3Er4llC9OjXUqEE9NWlcX2EVy7u8spqTFavXa/Q7HxmL3Xr0oV7ZugFdvHhJGzdv1+Jlq7R9917Fx59VamqqlPlcy5cLVasbm+qOrh1VpXIlr59jQUpOTtYroydo247dxk3ZlAwMUPVqVbR95x7jJiddOrXVS889aSyWvHi8G5s11roNm43FLrn6DArTj9Nn6ZsfZhqL3crp+Xn6fjgaPfJZtbm5hbH4qvD7nws0afJUY3E2/v7+euv159Xk+vrGTZnjNGZp6k+/Gjc5CS1bWh+OG6VK4WHGTabw8edTNGfeQmNxrho1rKc3XxvuUcC32WxatHSV3v/kq6xzlCfq1rpO48a+ku0xvD2eK1Yop4/fG61yoWWyymw2m+KOn9TSFWu0YtV6xR4/6fR7VCokWDUiq+m2Lu3V6sYb5O/vn7WtoFy8eElbtu3S2vUbtXnbTp09d17nzp13uqji4+OjMqVLKcDfP9+/O96yWq06dOSYVq+JUtSmbYo7fkJnzp7P9hmWCgmWn18JRVarrAZ1a+uGJg1V47rqeaon5MZqtSo27oTWb9yiTZt3aH/0IaWkXMz2vtnZn1tYxQqqX7eWbmjcQA3q1c4K0gXNjPWcqxHhxANmDCfeVGwcn8vK1VF676PJOU6xGRxUUk89/rA6d2gtH5+cG9dSU9O0Nmqjfprxh/btP+jy5OGJ8uXKqvttt+iu7l1VMjDAuNkrNptNm7bu0FffTMvXc7ILD6uoRx7qpY5tb8r1/bDLTzi5kJSs6b/N1W9/zHMZoox8LBbVrVNLrzz/lMLDKhg3FwlvKhMlAwN0712366df/lBaWrpxc5ZaNarrvbdGKjiopHGTog8d1QuvvpXjoOdSpYJ11x236rtpvxk3uZRT5V9efudUxPfnzftvl1M4sdlsmjbjD33740yvvz++vr56Ydjj6tS+VaFXrtzZvHWHXh3zvlIu5v79cVcJX7E6SuM++CLX7lDtWt+okS8MUfHiObcGXg5nz53X8Ffe0uEjMcZNOQotW0YT331VlSuFGze5ZbPZ9OnkbzX7v38bN2Xj4+OjKhGV1KJZI/W5t5vKlC7ltN3b49kxnNhsNu3YvVefffmd9uyLNu7qUmBAoPr1vUf3dO/q8TneHavVqm079+iHn2dp05YdeV7kMiQ4SJ3at9L9vXo4ha6CcPzEKf0y608tWLhCScl5mwbb19dXjRvW08MP9lT9OrXy/V2/kJSs73/6Vf9dsDTPz8nO19dXHdq01CMP91bF8qHGzV4zYz3nape/byGuKEuWr9FbEyblGEwkKSHxglauiVJ6uvuTqs1mU9TGLRow+CWNfucj7dkXnecvrCSdPHVaU374RX36P62fZvyR53nwj5+M1/Mj39aLr76T7+dkFxt3XO++/5mGvTRGsXGF20f6n83b9OQzr+qn6b97FEyUObvVjl179PTwUVq/YYtxsylFVApTqZCc+/Ifiz2uY8fijMWSpKMxx5RwPtFY7CQiPExhFcoZi+GBpSvW6vvpv3v9/fH19dWI557ULR1a57uykh/XRVZTlcqeVay3bNvptPaGzWbTvAWLNe6Dz3MNJv7+/upx562mDCaSdOjwUZ3Iw6xY991zp1fBRJldve687RaFBAcZNymoZKBubnmDnh/yuKZ+OVH/mzVV//f5eD35WN9swSQ/UlJS9MkX3+i5EWM9DiaSlJScpMlf/6B33v8s1888J9EHj2jI869r+MtjM7qd5jGYSNL5hET9PneBHhwwTB9/PiVfz8vuQlKyPvr8//Tw48/p97kL8hUCUlNTFbVxi555cYyeHj7K6wBsZ7VaNXfeIvXp/7Rm/jEvX8/JLjU1VX8tXqHHBr+oX2b9mefPwaz1nGsB4eQaERd3Qt/88Eu25lpXihcvpts7d3DbZHvx4iV99uV3GvnGewU+W1JKykVN+X6GRrw+TvGncx4QbbRr9z49/dxr2lJI6xvs2r1Pz498y6sfPU/ZbNJ/5y/Wq2Pez/N7ej4hUe9O/Ew7du01bjKd8uXKKrJaVWOxkwtJydrpZtyJJ+NNakRWVaAHXVLgbMeuvfrki288Olc48vf304jhT6pD25uMm4pccFBJNb+hsbHYrb8XLtcvv83V8ROnNOqtifpw0v95dHGg
We are working on getting the other models to a finalized version like Nano! |
alperenyildiz/R4VD_GRPO_LLAMA_FULL | alperenyildiz | 2025-05-22T13:15:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"grpo",
"GRPO",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T12:55:34Z | ---
library_name: transformers
tags:
- trl
- grpo
- GRPO
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leeccNLPLAB/unsloth-Llama-3.1-8B-Instruct-bnb-4bit-BookSQL-v1 | leeccNLPLAB | 2025-05-22T13:14:21Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T12:34:15Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
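No official usage snippet is documented yet. As a rough, unverified sketch: a bitsandbytes 4-bit checkpoint like this one can typically be loaded with plain `transformers` plus `bitsandbytes`. The repo id below comes from this card; the prompt is a made-up example, since the expected BookSQL input format is undocumented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "leeccNLPLAB/unsloth-Llama-3.1-8B-Instruct-bnb-4bit-BookSQL-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The repo name indicates a bitsandbytes 4-bit checkpoint: `bitsandbytes`
# and a CUDA device are assumed to be available.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Hypothetical prompt; the input format expected by this fine-tune is undocumented.
messages = [{"role": "user", "content": "List all invoices issued in 2023."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```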
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
feilongfl/Qwen3ChineseNewsSummary | feilongfl | 2025-05-22T13:13:38Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"llama-factory",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-22T12:47:56Z | ---
license: apache-2.0
tags:
- llama-factory
---
|
CheeLi03/whisper-base-ko-puct-2k | CheeLi03 | 2025-05-22T13:11:05Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"region:us"
]
| null | 2025-05-22T12:06:49Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: Whisper base Korean Punctuation 2k - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: ko_kr
split: None
args: 'config: ko split: test'
metrics:
- type: wer
value: 28.794326241134755
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base Korean Punctuation 2k - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 28.7943
## Model description
More information needed
## Intended uses & limitations
More information needed
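Pending official guidance, a minimal inference sketch using the standard `transformers` ASR pipeline; the repo id matches this card, but the audio path is a placeholder for a Korean speech clip.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub (repo id from this card).
asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-base-ko-puct-2k",
)

result = asr(
    "sample_ko.wav",  # placeholder path to a Korean audio clip
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```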
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
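For reference, a sketch of the `Seq2SeqTrainingArguments` these values imply; the `output_dir` is a placeholder and this is not the exact training script.

```python
from transformers import Seq2SeqTrainingArguments

# output_dir is a placeholder; the remaining values mirror the list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ko-puct-2k",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed precision
)
```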
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.039 | 6.2893 | 1000 | 0.4318 | 28.4043 |
| 0.0085 | 12.5786 | 2000 | 0.4727 | 28.7943 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
zf31265639/dqn-SpaceInvadersNoFrameskip-v4 | zf31265639 | 2025-05-22T13:10:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T13:09:50Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 460.00 +/- 193.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zf31265639 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zf31265639 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zf31265639
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
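To load the checkpoint directly in Python rather than through the RL Zoo CLI, a minimal sketch follows; the artifact filename assumes the usual RL Zoo `<algo>-<env>.zip` convention and has not been verified against this repo.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed artifact name: RL Zoo uploads are usually saved as "<algo>-<env>.zip".
checkpoint = load_from_hub(
    repo_id="zf31265639/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Evaluation needs the same preprocessing as training:
# AtariWrapper plus a 4-frame stack (see the hyperparameters above).
```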
|
najwaa/absa-digital_cameras-polarity-p2 | najwaa | 2025-05-22T13:10:12Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"region:us"
]
| text-classification | 2025-05-22T13:09:38Z | ---
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Great value for money with:Great value for money with reasonable pricing that
fits most budgets perfectly.
- text: and washed out colors throughout.:Photo quality is terrible with blurry images
and washed out colors throughout.
- text: with incredibly detailed images and vibrant colors:The photo quality is absolutely
stunning with incredibly detailed images and vibrant colors.
- text: Expensive beyond justification -:Expensive beyond justification - the money
could be better spent on alternatives.
- text: camera is extremely easy to use with an intuitive:The camera is extremely
easy to use with an intuitive screen and simple settings.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
---
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [setfit-absa-aspect](https://huggingface.co/setfit-absa-aspect)
- **SetFitABSA Polarity Model:** [najwaa/absa-digital_cameras-polarity-p2](https://huggingface.co/najwaa/absa-digital_cameras-polarity-p2)
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| positive | <ul><li>'The autofocus is lightning fast:The autofocus is lightning fast and the image quality is absolutely stunning for portraits.'</li><li>'fast and the image quality is absolutely stunning:The autofocus is lightning fast and the image quality is absolutely stunning for portraits.'</li><li>'sharp, the auto focus is easy to:the images are sharp, the auto focus is easy to use, and the lens is decent quality.'</li></ul> |
| negative | <ul><li>', but the menu system is confusing for:Beautiful sharp photos with vibrant colors, but the menu system is confusing for beginners.'</li><li>', though the LCD screen is hard to:The lens produces crisp images even at maximum zoom, though the LCD screen is hard to see in bright sunlight.'</li><li>'on, but settings navigation is cumbersome.:The zoom range is impressive and focus accuracy is spot-on, but settings navigation is cumbersome.'</li></ul> |
| negative | <ul><li>'not worth the price value.:definitely not worth the price value.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"setfit-absa-aspect",
"najwaa/absa-digital_cameras-polarity-p2",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 20.5822 | 57 |
| Label | Training Sample Count |
|:----------|:----------------------|
| negative | 97 |
| negative | 1 |
| positive | 115 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
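A minimal training sketch consistent with these hyperparameters; the base model and spaCy pipeline match this card, but the tiny inline dataset is a placeholder, as the actual training data is not documented here.

```python
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Placeholder data: SetFit ABSA expects "text", "span", "label", "ordinal" columns.
train_dataset = Dataset.from_dict({
    "text": ["The zoom is great but the menu is confusing."] * 2,
    "span": ["zoom", "menu"],
    "label": ["positive", "negative"],
    "ordinal": [0, 0],
})

model = AbsaModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    spacy_model="en_core_web_sm",
)

args = TrainingArguments(
    batch_size=32,
    num_epochs=5,
    body_learning_rate=2e-05,
    head_learning_rate=0.01,
    use_amp=True,
    load_best_model_at_end=True,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```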
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0014 | 1 | 0.3121 | - |
| 0.0140 | 10 | - | 0.2740 |
| 0.0280 | 20 | - | 0.2615 |
| 0.0420 | 30 | - | 0.2430 |
| 0.0560 | 40 | - | 0.2219 |
| 0.0700 | 50 | 0.2693 | 0.1975 |
| 0.0840 | 60 | - | 0.1651 |
| 0.0980 | 70 | - | 0.1169 |
| 0.1120 | 80 | - | 0.0611 |
| 0.1261 | 90 | - | 0.0338 |
| 0.1401 | 100 | 0.126 | 0.0204 |
| 0.1541 | 110 | - | 0.0076 |
| 0.1681 | 120 | - | 0.0071 |
| 0.1821 | 130 | - | 0.0047 |
| 0.1961 | 140 | - | 0.0032 |
| 0.2101 | 150 | 0.0126 | 0.0029 |
| 0.2241 | 160 | - | 0.0027 |
| 0.2381 | 170 | - | 0.0032 |
| 0.2521 | 180 | - | 0.0035 |
| 0.2661 | 190 | - | 0.0032 |
| 0.2801 | 200 | 0.0044 | 0.0027 |
| 0.2941 | 210 | - | 0.0027 |
### Framework Versions
- Python: 3.11.12
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- spaCy: 3.7.5
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MinaMila/gemma2_2b_LoRa_ACSEmployment_2_cfda_ep9_22 | MinaMila | 2025-05-22T13:07:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T13:07:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-1-gguf | pandaiedu | 2025-05-22T13:03:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-1-gguf",
"base_model:quantized:pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-1-gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T13:01:12Z | ---
base_model: pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-1-gguf
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** pandaiedu
- **License:** apache-2.0
- **Finetuned from model:** pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-1-gguf
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
najwaa/absa-headphones-aspect-p2 | najwaa | 2025-05-22T12:55:15Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"region:us"
]
| text-classification | 2025-05-22T12:55:09Z | ---
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: processing speed:The processing speed is excellent I swear. Worth the money.
- text: hurt:These headphones are extremely uncomfortable and hurt my ears after short
use.
- text: hours:These headphones are incredibly comfortable and fit perfectly for hours
of use.
- text: workday:The battery life is exceptional and lasts throughout my entire workday
effortlessly.
- text: sound:The sound lacks depth and the audio quality is disappointing and flat.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/all-MiniLM-L6-v2
---
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates (see the sketch after this list).
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
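As a rough illustration of step 1, here is a minimal sketch of spaCy proposing candidate spans (assuming `en_core_web_sm` is installed; noun chunks are one common heuristic, not necessarily the exact extraction rule used here):
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The sound quality is great but the battery dies quickly.")

# Noun chunks serve as candidate aspect spans for the SetFit filter model
print([chunk.text for chunk in doc.noun_chunks])
# e.g. ['The sound quality', 'the battery']
```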
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [najwaa/absa-headphones-aspect-p2](https://huggingface.co/najwaa/absa-headphones-aspect-p2)
- **SetFitABSA Polarity Model:** [setfit-absa-polarity](https://huggingface.co/setfit-absa-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'sound quality:Amazing sound quality with deep bass that really makes music come alive.'</li><li>'bass:Amazing sound quality with deep bass that really makes music come alive.'</li><li>'audio:The audio is crystal clear but they become uncomfortable after wearing for more than an hour.'</li></ul> |
| no aspect | <ul><li>'music:Amazing sound quality with deep bass that really makes music come alive.'</li><li>'crystal:The audio is crystal clear but they become uncomfortable after wearing for more than an hour.'</li><li>'hour:The audio is crystal clear but they become uncomfortable after wearing for more than an hour.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"najwaa/absa-headphones-aspect-p2",
"setfit-absa-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.3499 | 52 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 271 |
| aspect | 152 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3624 | - |
| 0.0033 | 10 | - | 0.3137 |
| 0.0066 | 20 | - | 0.3121 |
| 0.0099 | 30 | - | 0.3096 |
| 0.0132 | 40 | - | 0.3062 |
| 0.0165 | 50 | 0.3537 | 0.3019 |
| 0.0198 | 60 | - | 0.2968 |
| 0.0231 | 70 | - | 0.2916 |
| 0.0264 | 80 | - | 0.2862 |
| 0.0297 | 90 | - | 0.2808 |
| 0.0330 | 100 | 0.309 | 0.2757 |
| 0.0363 | 110 | - | 0.2711 |
| 0.0396 | 120 | - | 0.2671 |
| 0.0429 | 130 | - | 0.2640 |
| 0.0462 | 140 | - | 0.2625 |
| 0.0495 | 150 | 0.2754 | 0.2617 |
| 0.0528 | 160 | - | 0.2618 |
| 0.0561 | 170 | - | 0.2618 |
| 0.0594 | 180 | - | 0.2619 |
| 0.0627 | 190 | - | 0.2624 |
| 0.0660 | 200 | 0.2608 | 0.2618 |
### Framework Versions
- Python: 3.11.12
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- spaCy: 3.7.5
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/DAPO_KTAE-7B-GGUF | mradermacher | 2025-05-22T12:52:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SunW7777/DAPO_KTAE-7B",
"base_model:quantized:SunW7777/DAPO_KTAE-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T12:11:05Z | ---
base_model: SunW7777/DAPO_KTAE-7B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SunW7777/DAPO_KTAE-7B
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
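As a quick start, here is a minimal sketch of loading one of the quants below with `llama-cpp-python` (my assumption — any GGUF-compatible runtime works; the file name matches the Q4_K_M entry in the table):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="DAPO_KTAE-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Prove that the sum of two even integers is even.", max_tokens=256)
print(out["choices"][0]["text"])
```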
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE-7B-GGUF/resolve/main/DAPO_KTAE-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fitrilailyy/llm-assignment2-SFTonly | fitrilailyy | 2025-05-22T12:51:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-22T12:49:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pandaiedu/pandai-unsloth-gemma-3-1b-it-merged-sejarah-1-epoch-iter-1-gguf | pandaiedu | 2025-05-22T12:50:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:pandaiedu/pandai-unsloth-gemma-3-1b-it-merged-sejarah-1-epoch-iter-1",
"base_model:quantized:pandaiedu/pandai-unsloth-gemma-3-1b-it-merged-sejarah-1-epoch-iter-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T12:50:09Z | ---
base_model: pandaiedu/pandai-unsloth-gemma-3-1b-it-merged-sejarah-1-epoch-iter-1
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** pandaiedu
- **License:** apache-2.0
- **Finetuned from model:** pandaiedu/pandai-unsloth-gemma-3-1b-it-merged-sejarah-1-epoch-iter-1
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nabil-tazi/autotrain-yr729-mb8s3 | nabil-tazi | 2025-05-22T12:26:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-22T12:19:31Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
loss: 0.12417348474264145
runtime: 1.2731
samples_per_second: 113.897
steps_per_second: 3.927
: 4.756756756756757
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("nabil-tazi/autotrain-yr729-mb8s3")  # this card's Hub repo id
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
MayeulCr/MNLP_M2_quantized_model | MayeulCr | 2025-05-22T12:26:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-22T12:25:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaestrAI/tara_green-lora-1747916623 | MaestrAI | 2025-05-22T12:23:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T12:23:42Z | # tara_green LoRA Model
This is a LoRA model for the character Tara Green
Created at 2025-05-22 14:23:43
|
zf31265639/Taxi-V3 | zf31265639 | 2025-05-22T12:23:18Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T12:23:14Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="zf31265639/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
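To roll out the agent greedily with the downloaded Q-table, a short sketch (this assumes the course's pickle layout with a `"qtable"` key and a gym version whose `reset()` returns `(obs, info)`):
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```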
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_Adult_cfda_ep9_22 | MinaMila | 2025-05-22T12:22:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T12:22:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zf31265639/q-FrozenLake-v1-4x4-noSlippery | zf31265639 | 2025-05-22T12:22:07Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T12:22:03Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zf31265639/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sravanthib/test | sravanthib | 2025-05-22T12:10:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:finetune:NousResearch/Hermes-3-Llama-3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-23T10:46:26Z | ---
base_model: NousResearch/Hermes-3-Llama-3.1-8B
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for test
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sravanthib/test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/golden-goose/huggingface/runs/bhunvn4v)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
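For orientation, a minimal sketch of GRPO training with TRL's `GRPOTrainer` (a toy illustration — the reward functions and preprocessing actually used for this model are not documented in this card):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.rename_column("problem", "prompt")  # GRPOTrainer expects a "prompt" column

def toy_reward(completions, **kwargs):
    # Placeholder reward favoring ~200-character completions (not the real reward)
    return [-abs(len(c) - 200) / 100 for c in completions]

trainer = GRPOTrainer(
    model="NousResearch/Hermes-3-Llama-3.1-8B",
    reward_funcs=toy_reward,
    args=GRPOConfig(output_dir="grpo-sketch"),
    train_dataset=dataset,
)
trainer.train()
```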
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0a0+df5bbc09d1.nv24.12
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
budionosan/testdulula | budionosan | 2025-05-22T12:02:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
]
| null | 2025-05-22T12:00:58Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
johngreendr1/9e17b4c4-ca5f-4017-8f5a-cf1bfdceffad | johngreendr1 | 2025-05-22T11:54:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"region:us"
]
| null | 2025-05-22T11:54:33Z | ---
base_model: unsloth/gemma-7b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
CheeLi03/whisper-base-ko-puct-4k | CheeLi03 | 2025-05-22T11:53:55Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"region:us"
]
| null | 2025-05-22T09:47:54Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: Whisper base Korean Punctuation 4k - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: ko_kr
split: None
args: 'config: ko split: test'
metrics:
- type: wer
value: 29.131205673758863
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base Korean Punctuation 4k - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5314
- Wer: 29.1312
## Model description
More information needed
## Intended uses & limitations
More information needed
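Pending fuller documentation, here is a minimal Korean transcription sketch with the 🤗 `pipeline` API (the audio path is illustrative; 16 kHz mono input is assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="CheeLi03/whisper-base-ko-puct-4k")
print(asr("sample_ko.wav")["text"])  # "sample_ko.wav" is an illustrative path
```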
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0363 | 6.2893 | 1000 | 0.4354 | 29.1489 |
| 0.0045 | 12.5786 | 2000 | 0.4961 | 28.9894 |
| 0.0025 | 18.8679 | 3000 | 0.5219 | 28.8121 |
| 0.0018 | 25.1572 | 4000 | 0.5314 | 29.1312 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
telegramlinkvideo/katrina.lim.viral.kiffy.telegram.link.video | telegramlinkvideo | 2025-05-22T11:45:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-22T11:43:51Z | Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/ergergre"> 🌐 Click Here To link (katrina.lim.viral.kiffy.telegram.link.video )
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/ergergre"> 🌐 katrina.lim.viral.kiffy.telegram.link.video
|
prithivMLmods/open-scene-detection | prithivMLmods | 2025-05-22T11:41:50Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"SigLIP2",
"Scene-Detection",
"buildings",
"forest",
"glacier",
"mountain",
"sea",
"street",
"en",
"dataset:prithivMLmods/OpenScene-Classification",
"base_model:google/siglip-base-patch16-512",
"base_model:finetune:google/siglip-base-patch16-512",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-19T07:22:48Z | ---
license: apache-2.0
datasets:
- prithivMLmods/OpenScene-Classification
language:
- en
base_model:
- google/siglip-base-patch16-512
pipeline_tag: image-classification
library_name: transformers
tags:
- SigLIP2
- Scene-Detection
- buildings
- forest
- glacier
- mountain
- sea
- street
---

# open-scene-detection
> open-scene-detection is a vision-language encoder model fine-tuned from [`siglip-base-patch16-512`](https://huggingface.co/google/siglip-base-patch16-512) for multi-class scene classification. It is trained to recognize and categorize natural and urban scenes using a curated visual dataset. The model uses the `SiglipForImageClassification` architecture.
```py
Classification Report:
precision recall f1-score support
buildings 0.9755 0.9570 0.9662 2625
forest 0.9989 0.9955 0.9972 2694
glacier 0.9564 0.9517 0.9540 2671
mountain 0.9540 0.9592 0.9566 2723
sea 0.9934 0.9898 0.9916 2758
street 0.9595 0.9819 0.9706 2874
accuracy 0.9728 16345
macro avg 0.9730 0.9725 0.9727 16345
weighted avg 0.9729 0.9728 0.9728 16345
```

---
## Label Space: 6 Classes
The model classifies an image into one of the following scenes:
```
Class 0: Buildings
Class 1: Forest
Class 2: Glacier
Class 3: Mountain
Class 4: Sea
Class 5: Street
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio hf_xet
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/open-scene-detection"  # model repository on the Hugging Face Hub
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping (class id -> name)
id2label = {
"0": "Buildings",
"1": "Forest",
"2": "Glacier",
"3": "Mountain",
"4": "Sea",
"5": "Street"
}
def classify_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=6, label="Scene Classification"),
title="open-scene-detection",
description="Upload an image to classify the scene into one of six categories: Buildings, Forest, Glacier, Mountain, Sea, or Street."
)
if __name__ == "__main__":
iface.launch()
```
---
## Intended Use
`open-scene-detection` is designed for:
* **Scene Recognition** – Automatically classify natural and urban scenes.
* **Environmental Mapping** – Support geographic and ecological analysis from visual data.
* **Dataset Annotation** – Efficiently label large-scale image datasets by scene.
* **Visual Search and Organization** – Enable smart scene-based filtering or retrieval.
* **Autonomous Systems** – Assist navigation and perception modules with scene understanding. |
digitalparth/DigitalMadhav | digitalparth | 2025-05-22T11:36:12Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-05-22T11:36:06Z | ---
license: cc-by-nc-4.0
---
|
mradermacher/DAPO_KTAE_1.5B-GGUF | mradermacher | 2025-05-22T11:28:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SunW7777/DAPO_KTAE_1.5B",
"base_model:quantized:SunW7777/DAPO_KTAE_1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T11:17:28Z | ---
base_model: SunW7777/DAPO_KTAE_1.5B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SunW7777/DAPO_KTAE_1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
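As a minimal sketch (not from the original card; the quant filename and parameters are illustrative assumptions), one way to run one of the quants listed below from Python is with `llama-cpp-python`:
```python
# Hedged sketch: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the static quants from the table below (Q4_K_M as an example).
gguf_path = hf_hub_download(
    repo_id="mradermacher/DAPO_KTAE_1.5B-GGUF",
    filename="DAPO_KTAE_1.5B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is 12 * 7? A:", max_tokens=32)
print(out["choices"][0]["text"])
```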
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DAPO_KTAE_1.5B-GGUF/resolve/main/DAPO_KTAE_1.5B.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
declare-lab/nora-finetuned-libero-10 | declare-lab | 2025-05-22T11:27:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-22T10:40:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
angrotanak/xlmr-intent-results | angrotanak | 2025-05-22T11:20:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-21T20:41:12Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlmr-intent-results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-intent-results
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8710
- Accuracy: 0.7273
- F1: 0.7235
- Precision: 0.8953
- Recall: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.7686 | 1.0 | 11 | 1.7656 | 0.3182 | 0.1536 | 0.1012 | 0.3182 |
| 1.7248 | 2.0 | 22 | 1.7412 | 0.1818 | 0.1340 | 0.1061 | 0.1818 |
| 1.6487 | 3.0 | 33 | 1.5732 | 0.4091 | 0.2485 | 0.1818 | 0.4091 |
| 1.6085 | 4.0 | 44 | 1.4907 | 0.3182 | 0.1782 | 0.1237 | 0.3182 |
| 1.5086 | 5.0 | 55 | 1.3331 | 0.4545 | 0.3348 | 0.4213 | 0.4545 |
| 1.4009 | 6.0 | 66 | 1.2478 | 0.5455 | 0.4597 | 0.5752 | 0.5455 |
| 1.271 | 7.0 | 77 | 1.1301 | 0.5 | 0.4303 | 0.5501 | 0.5 |
| 1.0579 | 8.0 | 88 | 0.9797 | 0.6818 | 0.6703 | 0.8644 | 0.6818 |
| 0.9895 | 9.0 | 99 | 0.9068 | 0.7273 | 0.7235 | 0.8953 | 0.7273 |
| 0.899 | 10.0 | 110 | 0.8710 | 0.7273 | 0.7235 | 0.8953 | 0.7273 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Chang-Hoo/gemma-3-4b-pt-2024 | Chang-Hoo | 2025-05-22T11:19:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T11:14:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vaibhavss/results | vaibhavss | 2025-05-22T11:19:02Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T08:57:45Z | ---
library_name: transformers
license: mit
base_model: openai-community/gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2471 | 0.5618 | 50 | 0.2137 |
| 0.1842 | 1.1236 | 100 | 0.1501 |
| 0.1631 | 1.6854 | 150 | 0.1109 |
| 0.0842 | 2.2472 | 200 | 0.0731 |
| 0.1029 | 2.8090 | 250 | 0.0583 |
| 0.0638 | 3.3708 | 300 | 0.0410 |
| 0.0611 | 3.9326 | 350 | 0.0307 |
| 0.0369 | 4.4944 | 400 | 0.0259 |
| 0.0367 | 5.0562 | 450 | 0.0184 |
| 0.0363 | 5.6180 | 500 | 0.0182 |
| 0.0175 | 6.1798 | 550 | 0.0154 |
| 0.0241 | 6.7416 | 600 | 0.0130 |
| 0.0185 | 7.3034 | 650 | 0.0117 |
| 0.0193 | 7.8652 | 700 | 0.0115 |
| 0.0153 | 8.4270 | 750 | 0.0109 |
| 0.017 | 8.9888 | 800 | 0.0102 |
| 0.016 | 9.5506 | 850 | 0.0102 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
mlfoundations-dev/stack_code_shortest_science_longest | mlfoundations-dev | 2025-05-22T11:18:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T00:01:30Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: stack_code_shortest_science_longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stack_code_shortest_science_longest
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/stack_code_shortest_science_longest dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
panchajanya-ai/lid_Indic_Vaani | panchajanya-ai | 2025-05-22T11:16:37Z | 30 | 2 | speechbrain | [
"speechbrain",
"audio-classification",
"en",
"hi",
"ta",
"te",
"mr",
"base_model:speechbrain/lang-id-commonlanguage_ecapa",
"base_model:finetune:speechbrain/lang-id-commonlanguage_ecapa",
"license:apache-2.0",
"region:us"
]
| audio-classification | 2025-05-15T07:21:22Z | ---
license: apache-2.0
language:
- en
- hi
- ta
- te
- mr
base_model:
- speechbrain/lang-id-commonlanguage_ecapa
pipeline_tag: audio-classification
library_name: speechbrain
---
## 🔧 Inference Example
```python
from speechbrain.inference.classifiers import EncoderClassifier
import time
# Load the model from Hugging Face
classifier = EncoderClassifier.from_hparams(
source="panchajanya-ai/lid_Indic_Vaani",
run_opts={"device": "cuda"} # use "cpu" if CUDA is not available
)
# Start timer
start_time = time.time()
# Run classification on an audio file
out_prob, score, index, text_lab = classifier.classify_file("sample_audio.wav")
# End timer
end_time = time.time()
# Print results
print("Probabilities:", out_prob)
print("Score:", score)
print("Index:", index)
print("Label:", text_lab)
print("Time taken:", end_time - start_time, "seconds")
```
|
Berkesule/qwenvl-2.5-7b-gptq-W4816-quantize-tr-dpo | Berkesule | 2025-05-22T11:15:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
]
| image-text-to-text | 2025-05-22T11:06:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eymericboyer/MNLP_sft_model | eymericboyer | 2025-05-22T11:11:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T08:42:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Northell/mm_v2v_swinv2_v107 | Northell | 2025-05-22T11:09:23Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-22T11:09:11Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
aaozgur/qwen25vl | aaozgur | 2025-05-22T11:06:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-22T11:06:33Z | ---
license: apache-2.0
---
|
ivrit-ai/whisper-large-v3-turbo-ggml | ivrit-ai | 2025-05-22T11:02:46Z | 0 | 3 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-02-11T09:45:31Z | ---
license: apache-2.0
---
This version of the model is compatible with ggml-based Whisper inference engines:
- [Whisper.cpp](https://github.com/ggerganov/whisper.cpp)
- [Vibe](https://github.com/thewh1teagle/vibe)
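As a hedged sketch (the binary name, model filename, and language code are assumptions, not taken from this card), the converted model can be driven through the whisper.cpp CLI, for example from Python:
```python
# Hedged sketch: assumes whisper.cpp has been built (producing ./main) and that
# a ggml model file from this repo sits in the working directory; "-l he" selects Hebrew.
import subprocess

subprocess.run(
    ["./main", "-m", "ggml-whisper-large-v3-turbo.bin", "-f", "audio.wav", "-l", "he"],
    check=True,
)
```
|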
rosieyzh/hh_hf-dpo-llama3_1_8b_instruct-checkpoint_7000-seed_42 | rosieyzh | 2025-05-22T10:59:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T10:53:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PJMixers-Dev/Gemma-3-Earthen-v0.2-4B-QLoRA | PJMixers-Dev | 2025-05-22T10:58:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"en",
"dataset:BeaverAI/REDACTED1",
"dataset:BeaverAI/REDACTED2",
"dataset:BeaverAI/REDACTED3",
"dataset:BeaverAI/REDACTED4",
"dataset:PJMixers-Dev/Lit-axo-Shuffled",
"dataset:PJMixers-Dev/Mielikki_Erebus-87k-axo",
"dataset:PJMixers/RyokoAI_Honeyfeed3600-Cleanish",
"dataset:PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo",
"dataset:Nelathan/synthetic-sugar-quill",
"dataset:PJMixers-Dev/winglian_visual-novels-json-axo",
"dataset:PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned",
"dataset:PJMixers-Dev/Subtitles",
"dataset:PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo",
"dataset:PJMixers-Dev/Fundus-105K-Formatted",
"dataset:PJMixers-Dev/Fundus-AP-News-Formatted",
"dataset:PJMixers/AP-News-2024",
"dataset:PJMixers-Dev/goodwiki-2024-12-04-axo",
"dataset:epfl-llm/guidelines",
"dataset:PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed",
"dataset:allura-org/gryphe-sonnet-3.5-charcards-names-added",
"dataset:anthracite-org/c2_logs_32k_llama3_qwen2_v1.3",
"dataset:PJMixers-Dev/MinervaAI_Aesir-Preview-Anon",
"dataset:PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT",
"dataset:PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT",
"dataset:grimulkan/aicg-logs-augmented",
"dataset:grimulkan/PIPPA-augmented-dedup",
"dataset:PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted",
"dataset:PJMixers/lodrick-the-lafted_OpusStories-ShareGPT",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Gryphe/Opus-WritingPrompts",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT",
"dataset:allura-org/fujin-instruct-v2",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"arxiv:1910.03771",
"arxiv:2503.19786",
"arxiv:2106.09685",
"arxiv:2305.14314",
"arxiv:2307.08691",
"arxiv:2410.10989",
"arxiv:2411.09009",
"arxiv:2107.04197",
"arxiv:2307.02047",
"arxiv:2010.06192",
"arxiv:2411.16085",
"arxiv:2501.18427",
"arxiv:2403.15279",
"arxiv:2308.05884",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-21T05:02:22Z | ---
base_model: google/gemma-3-4b-it
license: gemma
pipeline_tag: text-generation
library_name: peft
language:
- en
datasets:
- BeaverAI/REDACTED1
- BeaverAI/REDACTED2
- BeaverAI/REDACTED3
- BeaverAI/REDACTED4
- PJMixers-Dev/Lit-axo-Shuffled
- PJMixers-Dev/Mielikki_Erebus-87k-axo
- PJMixers/RyokoAI_Honeyfeed3600-Cleanish
- PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
- Nelathan/synthetic-sugar-quill
- PJMixers-Dev/winglian_visual-novels-json-axo
- PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
- PJMixers-Dev/Subtitles
- PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
- PJMixers-Dev/Fundus-105K-Formatted
- PJMixers-Dev/Fundus-AP-News-Formatted
- PJMixers/AP-News-2024
- PJMixers-Dev/goodwiki-2024-12-04-axo
- epfl-llm/guidelines
- PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
- allura-org/gryphe-sonnet-3.5-charcards-names-added
- anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
- PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
- PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
- PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
- grimulkan/aicg-logs-augmented
- grimulkan/PIPPA-augmented-dedup
- PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
- PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
- Gryphe/ChatGPT-4o-Writing-Prompts
- Gryphe/Opus-WritingPrompts
- anthracite-org/nopm_claude_writing_fixed
- PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
- allura-org/fujin-instruct-v2
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
---
# Gemma-3-Earthen-v0.2-4B-QLoRA
[`google/gemma-3-4b-it`](https://huggingface.co/google/gemma-3-4b-it) was trained at 8K with batch size 4 gradient accumulation 4, so each step was 131,072 tokens (including any padding tokens). It was trained for 160 steps, adding up to a total of 20,971,520 unique tokens seen.
This is a small test run. A larger version is planned.
## Quants
- [GGUF from mradermacher](https://huggingface.co/mradermacher/Gemma-3-Earthen-v0.2-4B-GGUF)
## Prompt Format
This model uses Gemma-3 Instruct format, but with system turn support.
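A sketch of what a formatted prompt might look like (the rendering of the system turn is an assumption, since stock Gemma-3 has no system role):
```python
# Assumption: the system message is rendered as its own turn, mirroring the
# user/model turns of the stock Gemma-3 Instruct template.
prompt = (
    "<bos><start_of_turn>system\n"
    "You are a narrator for a grounded fantasy story.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Describe the first morning in the village.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```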
## Training Details
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
```yaml
# Requirements before running
# - Get latest commit of axolotl (currently c0a0c75)
# - Download these to axolotl/src/axolotl/prompt_formatters
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/formatter_regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customcompletion-regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customgemma3-regex.py
# - pip install ftfy
# - pip install git+https://github.com/xzuyn/CAME.git@sr-grams-cautious-8bit
# Weights and Biases logging config
wandb_project: Gemma-3-4B
wandb_entity:
wandb_watch:
wandb_name: Gemma-3-Earthen-v0.2-4B-QLoRA-run1
wandb_log_model:
# Model checkpointing config
output_dir: ./Outputs/Gemma-3-Earthen-v0.2-4B-QLoRA-run1
save_steps: 10
save_safetensors: true
save_total_limit: 2
save_only_model: true
# Model architecture config
base_model: google/gemma-3-4b-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Mixed precision training config
bf16: true
fp16: false
tf32: false
# Model loading config
load_in_8bit: false
load_in_4bit: true
strict: false
# Sequence config
sequence_len: 8192
min_sample_len: 256
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false
# LoRA adapter config
adapter: qlora
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.125
lora_target_modules: 'language_model.model.layers.[\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'
embeddings_skip_upcast: true
# Dataset config
datasets:
# Completion
# Story-like Data
- path: BeaverAI/REDACTED1
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/Lit-axo-Shuffled
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/Mielikki_Erebus-87k-axo
split: train[:1000]
type: customcompletion-regex
- path: PJMixers/RyokoAI_Honeyfeed3600-Cleanish
split: train[:1000]
type: customcompletion-regex
- path: BeaverAI/REDACTED2
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
split: train[:1000]
type: customcompletion-regex
- path: Nelathan/synthetic-sugar-quill
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/winglian_visual-novels-json-axo
split: train[:1000]
type: customcompletion-regex
- path: BeaverAI/REDACTED3
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
split: train[:1000]
type: customcompletion-regex
# Subtitle Data
- path: PJMixers-Dev/Subtitles
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
split: train[:1000]
type: customcompletion-regex
# News Data
- path: PJMixers-Dev/Fundus-105K-Formatted
split: train[:1000]
type: customcompletion-regex
- path: PJMixers-Dev/Fundus-AP-News-Formatted
split: train[:1000]
type: customcompletion-regex
- path: PJMixers/AP-News-2024
split: train[:1000]
type: customcompletion-regex
# Misc Data
- path: PJMixers-Dev/goodwiki-2024-12-04-axo
split: train[:1000]
type: customcompletion-regex
- path: epfl-llm/guidelines
split: train[:1000]
field: clean_text
type: customcompletion-regex
# Gemma-3 Instruct
# RP Data
- path: PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
type: customgemma3-regex
- path: allura-org/gryphe-sonnet-3.5-charcards-names-added
type: customgemma3-regex
- path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
type: customgemma3-regex
- path: BeaverAI/REDACTED4
type: customgemma3-regex
- path: PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
type: customgemma3-regex
- path: PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
type: customgemma3-regex
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: customgemma3-regex
- path: PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
type: customgemma3-regex
- path: PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
type: customgemma3-regex
- path: grimulkan/aicg-logs-augmented
type: customgemma3-regex
- path: grimulkan/PIPPA-augmented-dedup
type: customgemma3-regex
- path: PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
type: customgemma3-regex
# InstStory Data
- path: PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
type: customgemma3-regex
- path: Gryphe/ChatGPT-4o-Writing-Prompts
type: customgemma3-regex
- path: Gryphe/Opus-WritingPrompts
type: customgemma3-regex
- path: anthracite-org/nopm_claude_writing_fixed
type: customgemma3-regex
- path: PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
type: customgemma3-regex
- path: allura-org/fujin-instruct-v2
type: customgemma3-regex
# Adventure Data
- path: PocketDoc/Dans-Prosemaxx-Adventure
type: customgemma3-regex
- path: PocketDoc/Dans-Failuremaxx-Adventure-3
type: customgemma3-regex
test_datasets:
val_set_size: 256
eval_strategy: steps
eval_steps: 10
dataset_prepared_path: ./00-Tokenized-Datasets/Gemma-3-Earthen-v0.2-4B-LoRA-seed42
shuffle_merged_datasets: true
dataset_processes:
# Training hyperparameters
num_epochs: 1
gradient_accumulation_steps: 4
micro_batch_size: 4
eval_batch_size: 4
warmup_steps: 0
optimizer: came_pytorch
optim_args:
enable_stochastic_rounding: true
enable_cautious: true
enable_8bit: true
lr_scheduler: rex
learning_rate: 2.5e-7
cosine_min_lr_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 0.5
logging_steps: 1
# Model optimization
gradient_checkpointing: offload
sdp_attention: true
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false
liger_fused_linear_cross_entropy: false
lora_mlp_kernel: false
lora_qkv_kernel: false
lora_o_kernel: false
# DeepSpeed
deepspeed:
# Garbage Collection
gc_steps:
# Debug config
debug: true
seed: 42
# Token config
special_tokens:
bos_token: "<bos>"
eos_token: "<eos>"
pad_token: "<pad>"
tokens:
```
## Citations
<details><summary>Show Citations</summary>
```bib
@misc{wolf2020huggingfacestransformersstateoftheartnatural,
title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
year={2020},
eprint={1910.03771},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1910.03771},
}
@misc{gemmateam2025gemma3technicalreport,
title={Gemma 3 Technical Report},
author={Gemma Team and Aishwarya Kamath and Johan Ferret and Shreya Pathak and Nino Vieillard and Ramona Merhej and Sarah Perrin and Tatiana Matejovicova and Alexandre Ramé and Morgane Rivière and Louis Rouillard and Thomas Mesnard and Geoffrey Cideron and Jean-bastien Grill and Sabela Ramos and Edouard Yvinec and Michelle Casbon and Etienne Pot and Ivo Penchev and Gaël Liu and Francesco Visin and Kathleen Kenealy and Lucas Beyer and Xiaohai Zhai and Anton Tsitsulin and Robert Busa-Fekete and Alex Feng and Noveen Sachdeva and Benjamin Coleman and Yi Gao and Basil Mustafa and Iain Barr and Emilio Parisotto and David Tian and Matan Eyal and Colin Cherry and Jan-Thorsten Peter and Danila Sinopalnikov and Surya Bhupatiraju and Rishabh Agarwal and Mehran Kazemi and Dan Malkin and Ravin Kumar and David Vilar and Idan Brusilovsky and Jiaming Luo and Andreas Steiner and Abe Friesen and Abhanshu Sharma and Abheesht Sharma and Adi Mayrav Gilady and Adrian Goedeckemeyer and Alaa Saade and Alex Feng and Alexander Kolesnikov and Alexei Bendebury and Alvin Abdagic and Amit Vadi and András György and André Susano Pinto and Anil Das and Ankur Bapna and Antoine Miech and Antoine Yang and Antonia Paterson and Ashish Shenoy and Ayan Chakrabarti and Bilal Piot and Bo Wu and Bobak Shahriari and Bryce Petrini and Charlie Chen and Charline Le Lan and Christopher A. Choquette-Choo and CJ Carey and Cormac Brick and Daniel Deutsch and Danielle Eisenbud and Dee Cattle and Derek Cheng and Dimitris Paparas and Divyashree Shivakumar Sreepathihalli and Doug Reid and Dustin Tran and Dustin Zelle and Eric Noland and Erwin Huizenga and Eugene Kharitonov and Frederick Liu and Gagik Amirkhanyan and Glenn Cameron and Hadi Hashemi and Hanna Klimczak-Plucińska and Harman Singh and Harsh Mehta and Harshal Tushar Lehri and Hussein Hazimeh and Ian Ballantyne and Idan Szpektor and Ivan Nardini and Jean Pouget-Abadie and Jetha Chan and Joe Stanton and John Wieting and Jonathan Lai and Jordi Orbay and Joseph Fernandez and Josh Newlan and Ju-yeong Ji and Jyotinder Singh and Kat Black and Kathy Yu and Kevin Hui and Kiran Vodrahalli and Klaus Greff and Linhai Qiu and Marcella Valentine and Marina Coelho and Marvin Ritter and Matt Hoffman and Matthew Watson and Mayank Chaturvedi and Michael Moynihan and Min Ma and Nabila Babar and Natasha Noy and Nathan Byrd and Nick Roy and Nikola Momchev and Nilay Chauhan and Noveen Sachdeva and Oskar Bunyan and Pankil Botarda and Paul Caron and Paul Kishan Rubenstein and Phil Culliton and Philipp Schmid and Pier Giuseppe Sessa and Pingmei Xu and Piotr Stanczyk and Pouya Tafti and Rakesh Shivanna and Renjie Wu and Renke Pan and Reza Rokni and Rob Willoughby and Rohith Vallu and Ryan Mullins and Sammy Jerome and Sara Smoot and Sertan Girgin and Shariq Iqbal and Shashir Reddy and Shruti Sheth and Siim Põder and Sijal Bhatnagar and Sindhu Raghuram Panyam and Sivan Eiger and Susan Zhang and Tianqi Liu and Trevor Yacovone and Tyler Liechty and Uday Kalra and Utku Evci and Vedant Misra and Vincent Roseberry and Vlad Feinberg and Vlad Kolesnikov and Woohyun Han and Woosuk Kwon and Xi Chen and Yinlam Chow and Yuvein Zhu and Zichuan Wei and Zoltan Egyed and Victor Cotruta and Minh Giang and Phoebe Kirk and Anand Rao and Kat Black and Nabila Babar and Jessica Lo and Erica Moreira and Luiz Gustavo Martins and Omar Sanseviero and Lucas Gonzalez and Zach Gleicher and Tris Warkentin and Vahab Mirrokni and Evan Senter and Eli Collins and Joelle Barral and Zoubin Ghahramani and Raia Hadsell and Yossi Matias and D. 
Sculley and Slav Petrov and Noah Fiedel and Noam Shazeer and Oriol Vinyals and Jeff Dean and Demis Hassabis and Koray Kavukcuoglu and Clement Farabet and Elena Buchatskaya and Jean-Baptiste Alayrac and Rohan Anil and Dmitry Lepikhin and Sebastian Borgeaud and Olivier Bachem and Armand Joulin and Alek Andreev and Cassidy Hardin and Robert Dadashi and Léonard Hussenot},
year={2025},
eprint={2503.19786},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.19786},
}
@misc{hu2021loralowrankadaptationlarge,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
year={2021},
eprint={2106.09685},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2106.09685},
}
@misc{dettmers2023qloraefficientfinetuningquantized,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Tim Dettmers and Artidoro Pagnoni and Ari Holtzman and Luke Zettlemoyer},
year={2023},
eprint={2305.14314},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2305.14314},
}
@misc{dao2023flashattention2fasterattentionbetter,
title={FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning},
author={Tri Dao},
year={2023},
eprint={2307.08691},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2307.08691},
}
@misc{hsu2024ligerkernelefficienttriton,
title={Liger Kernel: Efficient Triton Kernels for LLM Training},
author={Pin-Lun Hsu and Yun Dai and Vignesh Kothapalli and Qingquan Song and Shao Tang and Siyu Zhu and Steven Shimizu and Shivam Sahni and Haowen Ning and Yanning Chen},
year={2024},
eprint={2410.10989},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2410.10989},
}
@misc{wijmans2025cutlosseslargevocabularylanguage,
title={Cut Your Losses in Large-Vocabulary Language Models},
author={Erik Wijmans and Brody Huval and Alexander Hertzberg and Vladlen Koltun and Philipp Krähenbühl},
year={2025},
eprint={2411.09009},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.09009},
}
@misc{chen2021rexrevisitingbudgetedtraining,
title={REX: Revisiting Budgeted Training with an Improved Schedule},
author={John Chen and Cameron Wolfe and Anastasios Kyrillidis},
year={2021},
eprint={2107.04197},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2107.04197},
}
@misc{luo2023cameconfidenceguidedadaptivememory,
title={CAME: Confidence-guided Adaptive Memory Efficient Optimization},
author={Yang Luo and Xiaozhe Ren and Zangwei Zheng and Zhuo Jiang and Xin Jiang and Yang You},
year={2023},
eprint={2307.02047},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2307.02047},
}
@misc{zamirai2021revisitingbfloat16training,
title={Revisiting BFloat16 Training},
author={Pedram Zamirai and Jian Zhang and Christopher R. Aberger and Christopher De Sa},
year={2021},
eprint={2010.06192},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2010.06192},
}
@misc{liang2025cautiousoptimizersimprovingtraining,
title={Cautious Optimizers: Improving Training with One Line of Code},
author={Kaizhao Liang and Lizhang Chen and Bo Liu and Qiang Liu},
year={2025},
eprint={2411.16085},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.16085},
}
@misc{xie2025sana15efficientscaling,
title={SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer},
author={Enze Xie and Junsong Chen and Yuyang Zhao and Jincheng Yu and Ligeng Zhu and Chengyue Wu and Yujun Lin and Zhekai Zhang and Muyang Li and Junyu Chen and Han Cai and Bingchen Liu and Daquan Zhou and Song Han},
year={2025},
eprint={2501.18427},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.18427},
}
@misc{dallabetta2024fundussimpletousenewsscraper,
title={Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions},
author={Max Dallabetta and Conrad Dobberstein and Adrian Breiding and Alan Akbik},
year={2024},
eprint={2403.15279},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2403.15279},
}
@misc{gosling2023pippapartiallysyntheticconversational,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2308.05884},
}
```
</details>
|
AbdelilahFdg/darija-chat1 | AbdelilahFdg | 2025-05-22T10:58:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-22T10:57:51Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AbdelilahFdg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thucdangvan020999/ultravox_test_14 | thucdangvan020999 | 2025-05-22T10:55:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
]
| feature-extraction | 2025-05-22T10:55:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kureha295/ortho_model_baseline | kureha295 | 2025-05-22T10:52:47Z | 5 | 0 | null | [
"safetensors",
"llama",
"license:mit",
"region:us"
]
| null | 2025-05-15T10:48:58Z | ---
license: mit
---
This model was generated by taking the activations of the last 3 tokens of the prompt with the chat template applied; these tokens correspond to "<|Assistant|>\<think\>\\n".
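A minimal sketch of how such activations can be extracted with 🤗 transformers (this is an illustration, not the author's script; the prompt format is assumed from the sentence above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kureha295/ortho_model_baseline"  # illustrative; any chat model with this template works
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Hidden states of the last 3 prompt tokens, i.e. "<|Assistant|><think>\n"
last3 = out.hidden_states[-1][:, -3:, :]  # shape: (batch, 3, hidden_size)
```
|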
meimmo/trained-flux-lora-dior | meimmo | 2025-05-22T10:51:51Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:other",
"region:us"
]
| text-to-image | 2025-05-22T08:35:56Z | ---
base_model: black-forest-labs/FLUX.1-schnell
library_name: diffusers
license: other
instance_prompt: a photo of dress in Dior style by John Galliano from the years 1997
to 2011
widget:
- text: a photo of shoes in Dior style by John Galliano from the years 1997 to 2011
output:
url: image_0.png
- text: a photo of shoes in Dior style by John Galliano from the years 1997 to 2011
output:
url: image_1.png
- text: a photo of shoes in Dior style by John Galliano from the years 1997 to 2011
output:
url: image_2.png
- text: a photo of shoes in Dior style by John Galliano from the years 1997 to 2011
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - meimmo/trained-flux-lora-dior
<Gallery />
## Model description
These are meimmo/trained-flux-lora-dior DreamBooth LoRA weights for black-forest-labs/FLUX.1-schnell.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of dress in Dior style by John Galliano from the years 1997 to 2011` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](meimmo/trained-flux-lora-dior/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('meimmo/trained-flux-lora-dior', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of shoes in Dior style by John Galliano from the years 1997 to 2011').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
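As a hedged illustration of the weighting and fusing mentioned above (the adapter name `default_0` is what diffusers assigns when `load_lora_weights` is called without an explicit `adapter_name`; see the linked docs for the authoritative API):
```py
# Continuing from the pipeline above: scale the LoRA, then optionally fuse it
# into the base weights for slightly faster inference.
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
pipeline.fuse_lora()
image = pipeline('a photo of dress in Dior style by John Galliano from the years 1997 to 2011').images[0]
```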
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring this card's documented snippet above;
# treat it as illustrative rather than authoritative.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("meimmo/trained-flux-lora-dior", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("a photo of dress in Dior style by John Galliano from the years 1997 to 2011").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lucylyn/MSD-Qwen2VL-7B-Instruct | lucylyn | 2025-05-22T06:26:40Z | 0 | 0 | null | [
"pytorch",
"qwen2_vl",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-22T05:44:42Z | ---
license: apache-2.0
---
|
ostinborinvz/cvbxcvb | ostinborinvz | 2025-05-22T06:26:02Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-22T06:26:02Z | ---
license: bigscience-bloom-rail-1.0
---
|
DanielNRU/pollen-ner2-1800 | DanielNRU | 2025-05-22T06:19:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
]
| null | 2025-05-22T06:13:59Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-1800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-1800
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1738
- Precision: 0.8180
- Recall: 0.8936
- F1: 0.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
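Pending details from the authors, a minimal usage sketch (this assumes the PEFT adapter was saved with a token-classification config; label names are not documented in this card):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForTokenClassification

repo = "DanielNRU/pollen-ner2-1800"
# The adapter repo may not ship a tokenizer, so load it from the base model.
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-bg-cs-pl-ru-cased")
model = AutoPeftModelForTokenClassification.from_pretrained(repo)

text = "Пыльца берёзы и ольхи"  # hypothetical pollen-related input
inputs = tokenizer(text, return_tensors="pt")
pred_ids = model(**inputs).logits.argmax(-1)  # map ids to labels via model.config.id2label if present
```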
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 225 | 0.1737 | 0.8231 | 0.8876 | 0.8541 |
| No log | 2.0 | 450 | 0.1741 | 0.8204 | 0.8896 | 0.8536 |
| 0.2587 | 3.0 | 675 | 0.1738 | 0.8180 | 0.8936 | 0.8541 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
FL-PoC/bart-safe-AQSOL-seed-1 | FL-PoC | 2025-05-22T06:16:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-22T06:16:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lefantom00/viet-llama2-iSMART | lefantom00 | 2025-05-22T06:15:15Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"vi",
"base_model:infCapital/viet-llama2-ft",
"base_model:quantized:infCapital/viet-llama2-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T03:48:58Z | ---
base_model: infCapital/viet-llama2-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- vi
--- |
tarsur909/gpt2-large-imdb-ppo-1ep-25p-v3 | tarsur909 | 2025-05-22T06:07:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T06:07:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ajmalmahmood/LunarLander-v2 | ajmalmahmood | 2025-05-22T06:01:08Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T05:34:02Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.69 +/- 130.26
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ajmalmahmood/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
KSJcompany/llama3.2-1b-cas4133-assignment2-update1 | KSJcompany | 2025-05-22T05:57:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T05:53:54Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SaoSamarth/openai-whisper-large-v2-Khmer-dynamo-one | SaoSamarth | 2025-05-22T05:56:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T05:56:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lenerkasseos/zxcvx | lenerkasseos | 2025-05-22T05:54:21Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2025-05-22T05:54:21Z | ---
license: bigcode-openrail-m
---
|
sherryzju/sd-class-butterflies-32 | sherryzju | 2025-05-22T05:52:14Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2025-05-22T05:52:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('sherryzju/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_Adult_ep4_22 | MinaMila | 2025-05-22T05:44:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T05:44:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik87/a992198a-6075-421d-aaa3-e47ec8d4eea6 | dimasik87 | 2025-05-22T05:43:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-22T05:29:02Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a992198a-6075-421d-aaa3-e47ec8d4eea6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- e361aff1915418df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik87/a992198a-6075-421d-aaa3-e47ec8d4eea6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.5e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e361aff1915418df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 422ab872-3e11-4c2a-81ea-7bc9361000c1
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 422ab872-3e11-4c2a-81ea-7bc9361000c1
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# a992198a-6075-421d-aaa3-e47ec8d4eea6
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.765 | 0.0379 | 250 | 2.3420 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DanielNRU/pollen-ner2-1350 | DanielNRU | 2025-05-22T05:13:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
]
| null | 2025-05-22T05:05:54Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-1350
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-1350
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1532
- Precision: 0.8377
- Recall: 0.9016
- F1: 0.8685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 169 | 0.1732 | 0.8141 | 0.9056 | 0.8574 |
| No log | 2.0 | 338 | 0.1638 | 0.8272 | 0.9036 | 0.8637 |
| 0.3331 | 3.0 | 507 | 0.1532 | 0.8377 | 0.9016 | 0.8685 |
| 0.3331 | 4.0 | 676 | 0.1514 | 0.8402 | 0.8976 | 0.8680 |
| 0.3331 | 5.0 | 845 | 0.1584 | 0.8349 | 0.9036 | 0.8679 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
AFZAL0008/nanoVLM | AFZAL0008 | 2025-05-22T05:10:02Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
]
| image-text-to-text | 2025-05-22T05:09:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("AFZAL0008/nanoVLM")
```
|
AmazingCycleStar/q-Taxi-v3 | AmazingCycleStar | 2025-05-22T05:06:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-22T04:59:14Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AmazingCycleStar/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
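A hedged follow-up sketch: rolling out the greedy policy from the loaded Q-table. The `qtable` key is assumed from the course's push-to-hub helper, and `reset`/`step` return values vary between gym and gymnasium versions.
```python
# Greedy rollout; assumes gymnasium-style reset/step signatures.
state, info = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```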
|