| Column | Type | Range / distinct values |
| --- | --- | --- |
| modelId | string | length 5–134 |
| author | string | length 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–223M |
| likes | int64 | 0–10.1k |
| library_name | string (categorical) | 383 values |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (categorical) | 53 values |
| createdAt | unknown | — |
| card | string | length 11–1.01M |
splendor1811/gemma-2-2B-it-thinking_FC
splendor1811
"2025-03-02T18:21:00Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "endpoints_compatible", "region:us" ]
null
"2025-03-02T18:18:29Z"
--- base_model: google/gemma-2-2b-it library_name: transformers model_name: gemma-2-2B-it-thinking_FC tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-2-2B-it-thinking_FC This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="splendor1811/gemma-2-2B-it-thinking_FC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/splendor1811cmc/gemma-2b-thinking/runs/fuy9bhc0) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aadhistii/indonesian-roberta-base-sentiment-classifier-twt-sentiment-analysis-tsel
aadhistii
"2024-07-10T04:52:47Z"
63
0
transformers
[ "transformers", "tf", "tensorboard", "roberta", "text-classification", "generated_from_keras_callback", "base_model:w11wo/indonesian-roberta-base-sentiment-classifier", "base_model:finetune:w11wo/indonesian-roberta-base-sentiment-classifier", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-07-10T04:45:04Z"
--- license: mit base_model: w11wo/indonesian-roberta-base-sentiment-classifier tags: - generated_from_keras_callback model-index: - name: aadhistii/indonesian-roberta-base-sentiment-classifier-twt-sentiment-analysis-tsel results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aadhistii/indonesian-roberta-base-sentiment-classifier-twt-sentiment-analysis-tsel This model is a fine-tuned version of [w11wo/indonesian-roberta-base-sentiment-classifier](https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2241 - Validation Loss: 0.7091 - Train Accuracy: 0.7260 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 365, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.8603 | 0.7168 | 0.6952 | 0 | | 0.5596 | 0.6218 | 0.7397 | 1 | | 0.4117 | 0.6988 | 0.7123 | 2 | | 0.2847 | 0.6871 | 0.7329 | 3 | | 0.2241 | 0.7091 | 0.7260 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
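The card above gives the training setup but no inference snippet. A minimal usage sketch, assuming the repository ships TensorFlow weights (per its `tf` tag) and that `id2label` in the config reflects the fine-tuned sentiment labels:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

model_id = "aadhistii/indonesian-roberta-base-sentiment-classifier-twt-sentiment-analysis-tsel"

# Load the tokenizer and the fine-tuned TensorFlow checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

text = "Sinyal Telkomsel di daerah saya sangat stabil."  # example Indonesian tweet
inputs = tokenizer(text, return_tensors="tf", truncation=True)

logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
# id2label comes from the model config; verify it matches the labels used during fine-tuning
print(model.config.id2label[pred_id])
```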
WizWhite/gustav-vasa-1496-1560-historical-king-of-sweden
WizWhite
"2024-02-07T00:08:27Z"
33
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "character", "royalty", "historical", "king", "sweden", "swedish", "house of vasa", "vasa", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
"2024-02-07T00:08:25Z"
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=False tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - character - royalty - historical - king - sweden - swedish - house of vasa - vasa base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: GVASA widget: - text: 'hyperrealistic portrait photograph of a laughing (gvasa:1.5) dressed in a santa outfit, shot on sony a7, studio lighting' output: url: >- 4712224.jpeg - text: 'The cover of the vintage comic book (titled "THE DANE SLAYER":1.2) features an illustration of GVASA riding a giant white (moose), by michael whelan and gerald brom' output: url: >- 4712162.jpeg - text: 'hyperrealistic action photograph of (GVASA:1.4) Riding public transport, shot on sony a7, 50mm, highly detailed, filmic, soft lighting, insane detail, atmospheric, subsurface skin shading,' output: url: >- 4712165.jpeg - text: 'hyperrealistic action photograph of (GVASA GVASA:1.3) Setting the dinner table, shot on sony a7, 50mm, highly detailed, filmic, soft lighting, insane detail, atmospheric, subsurface skin shading,' output: url: >- 4712166.jpeg - text: 'hyperrealistic action photograph of (GVASA wearing a hat:1.4) walking a corgi in stockholm old town gamla stan, shot on sony a7, 50mm, highly detailed, natural lighting,' output: url: >- 4712167.jpeg - text: 'sven nordqvist style (GVASA:1.3) hiding in a haystack' output: url: >- 4712238.jpeg - text: 'vintage safety matches, (title logo "REFORMATION":1.5), (GVASA:1.3) wearing a hat, "made by gustav vasa"' output: url: >- 4712408.jpeg - text: 'gvasa in a felt puppet world style' output: url: >- 4712428.jpeg - text: 'breathtaking photo of gvasa (gvasa:1.4) dressed as a cyberpunk, urban neon tokyo background at night, long beard and pageboy hair' output: url: >- 4712579.jpeg - text: ' ' output: url: >- 4713481.jpeg --- # Gustav Vasa (1496 – 1560) – Historical king of Sweden <Gallery /> ## Model description <h3 id="heading-232">Gustav Vasa – Founding Father of the Swedish Nation <strong><span style="color:rgb(250, 176, 5)">✝</span></strong></h3><p><strong>Gustav I</strong> (born Gustav Eriksson of the Vasa noble family; 12 May 1496 – 29 September 1560), commonly known as <strong>Gustav Vasa</strong>, was King of Sweden from 1523 until his death in 1560. He led Sweden's war for independence against Denmark.</p><p><strong>Triggerword: </strong><u>GVASA</u></p><p>Trained on portrait paintings, engravings and statues – but fully capable of generating photo-like images, illustrations, comic style etcetera. </p><p></p><p><strong>Notes:</strong></p><p>• Biased towards a slightly sideways-facing portrait with a black coat.</p><p>• Longer prompts tend to lose some resemblance – you can combat this by increasing the LoRA weight, adding some more weight to the trigger word, or just duplicating the trigger word like so: <strong><em><u>(GVasa GVasa:1.3)</u></em></strong>. You can also add words like "large beard" or "pageboy hair" to see if it helps.</p> ## Trigger words You should use `GVASA`, `wearing a hat` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/WizWhite/gustav-vasa-1496-1560-historical-king-of-sweden/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('WizWhite/gustav-vasa-1496-1560-historical-king-of-sweden', weight_name='GUSTAV_VASA-000002.safetensors') image = pipeline('GVASA, wearing a hat').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Hachipo/qwen2.5-0.5B_educational_instruct_top1000
Hachipo
"2024-12-18T08:27:09Z"
186
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-18T08:26:04Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
irishprancer/7ebcedd3-3e6e-4cf0-8e80-5b51439402b6
irishprancer
"2025-02-27T21:59:10Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-02-27T21:15:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
midreal/hunyuan_sft_poc
midreal
"2025-02-27T02:29:30Z"
0
0
null
[ "hunyuan", "hunyuan-video", "hunyuan-lora", "lora", "replicate", "text-to-video", "en", "base_model:tencent/HunyuanVideo", "base_model:adapter:tencent/HunyuanVideo", "license:other", "region:us" ]
text-to-video
"2025-02-27T00:57:11Z"
--- license: other license_name: tencent-hunyuan-community license_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE language: - en tags: - hunyuan - hunyuan-video - hunyuan-lora - lora - replicate base_model: "tencent/HunyuanVideo" pipeline_tag: text-to-video # widget: # - text: >- # prompt # output: # url: https://... --- # Hunyuan_Sft_Poc <Gallery /> Trained on Replicate using: https://replicate.com/zsxkib/hunyuan-video-lora/train
Shahrukh0/attnprn-llama-3b-10L
Shahrukh0
"2025-02-25T15:46:27Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-25T15:43:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sofiaoliveira/dqn-SpaceInvadersNoFrameskip-v4
sofiaoliveira
"2022-08-10T11:51:57Z"
7
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2022-08-10T11:04:18Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 340.50 +/- 183.40 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sofiaoliveira -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sofiaoliveira ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 10000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.05), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 1000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
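Beyond the RL Zoo scripts, the checkpoint can also be loaded directly with Stable-Baselines3. A sketch assuming the usual `dqn-SpaceInvadersNoFrameskip-v4.zip` filename inside the repo (check the Files tab) and the Atari wrapper plus 4-frame stack used during training:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the checkpoint from the Hub (filename assumed to follow the RL Zoo convention)
checkpoint = load_from_hub(
    repo_id="sofiaoliveira/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrapper + 4-frame stack
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```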
hcho22/opus-mt-ko-en-finetuned-en-to-kr
hcho22
"2022-11-15T05:03:46Z"
83
1
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-11-10T03:37:03Z"
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: hcho22/opus-mt-ko-en-finetuned-en-to-kr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hcho22/opus-mt-ko-en-finetuned-en-to-kr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5856 - Validation Loss: 2.0437 - Train Bleu: 2.0518 - Train Gen Len: 20.8110 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:-----:| | 2.5856 | 2.0437 | 2.0518 | 20.8110 | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
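The card lists metrics but no inference example. A minimal sketch, assuming TensorFlow weights (per the `tf` tag) and that the model translates English to Korean as the repo name suggests (the base checkpoint is ko→en):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "hcho22/opus-mt-ko-en-finetuned-en-to-kr"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translation direction (English -> Korean) is inferred from the repo name
text = "The weather is nice today."
inputs = tokenizer(text, return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```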
bschwab/llama3-8b-alpaca-cleaned
bschwab
"2024-06-06T17:38:11Z"
2
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-06T17:24:24Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** bschwab - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
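Since the repo is tagged `gguf`, a llama.cpp-based inference sketch; the GGUF filename pattern and the Alpaca-style prompt format are assumptions — check the Files tab for the actual quantization name:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Pulls a GGUF file from the repo; the filename glob below is an assumption --
# replace it with the actual quantization present in the repository.
llm = Llama.from_pretrained(
    repo_id="bschwab/llama3-8b-alpaca-cleaned",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

# Alpaca-style prompt, matching the dataset name (assumption)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a GGUF file is.\n\n### Response:\n"
)
out = llm(prompt, max_tokens=200, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```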
cgullu/falcon-7b-finetune
cgullu
"2023-08-26T13:29:30Z"
0
0
null
[ "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:finetune:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
"2023-08-26T13:27:56Z"
--- base_model: ybelkada/falcon-7b-sharded-bf16 tags: - generated_from_trainer model-index: - name: falcon-7b-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-finetune This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
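The card documents the training recipe but not how to load the result. Assuming this repo stores a PEFT/LoRA adapter on top of the sharded Falcon base (common for this recipe — if it instead contains full weights, load it directly with `AutoModelForCausalLM`), a sketch:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "cgullu/falcon-7b-finetune"  # assumption: this repo holds a PEFT adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Question: What is LoRA fine-tuning?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```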
M4rt0no/Tokenizer-Priorizacion-v2_batch32
M4rt0no
"2025-03-12T14:53:18Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-12T14:53:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nhunglaaaaaaa/fd59c4c5-1361-4309-8a8c-5de5233132b5
nhunglaaaaaaa
"2025-02-03T02:12:48Z"
9
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-03T02:03:53Z"
--- library_name: peft license: mit base_model: microsoft/Phi-3.5-mini-instruct tags: - axolotl - generated_from_trainer model-index: - name: fd59c4c5-1361-4309-8a8c-5de5233132b5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3.5-mini-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 317f4ddd714657a5_train_data.json ds_type: json format: custom path: /workspace/input_data/317f4ddd714657a5_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhunglaaaaaaa/fd59c4c5-1361-4309-8a8c-5de5233132b5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/317f4ddd714657a5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 02b7ce2c-99ab-481c-acb1-095a30491b63 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 02b7ce2c-99ab-481c-acb1-095a30491b63 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # fd59c4c5-1361-4309-8a8c-5de5233132b5 This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8720 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5725 | 0.5380 | 200 | 0.8720 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
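Since the repository is a PEFT LoRA adapter for Phi-3.5-mini-instruct (per the axolotl config above), a minimal loading sketch; the prompt formatting and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "nhunglaaaaaaa/fd59c4c5-1361-4309-8a8c-5de5233132b5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt formatting here uses the base tokenizer's chat template; the adapter's
# training format (see the axolotl config above) may differ.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```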
yashasroy/llama381binstruct_summarize_short_merged
yashasroy
"2024-11-04T16:09:16Z"
81
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-11-04T16:06:04Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
malhajar/Shahin-v0.1
malhajar
"2024-12-09T09:56:33Z"
15
6
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "ar", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-11T13:12:34Z"
--- library_name: transformers language: - ar --- <div style="text-align: right; direction: rtl;"> <img src="https://huggingface.co/malhajar/Shahin-v0.1/resolve/90e48b33b4b9f095665a9cc3e822283f2134568c/d125b8078db86fa7a2c3dc1d7cc3a125835680ec1d23fe94e6866083.jpg" alt="Shahin-v0.1" width="400"/> # شاهين 0.1 (Scroll Down For English) **شاهين 0.1** هو نموذج لغة ضخم (LLM) مصمم خصيصًا للهجة السورية، تكريمًا لصمود وروح الشعب السوري. هذا النموذج يمثل منارة للحرية والتقدم، وتم تطويره تكريمًا لانتصارهم بعد 70 عامًا من الدكتاتورية. يتميز بطلاقة فائقة في اللهجة السورية ويبرع في مجموعة واسعة من المهام، من إنشاء الحوارات إلى تقديم رؤى ثقافية وتحليل تاريخي والمزيد. ## تفاصيل النموذج - **النموذج الأساسي**: بنية مخصصة مع 14 مليار معامل، محسنة للهجة السورية - **اللغات المدعومة**: اللهجة السورية بشكل رئيسي، مع دعم أساسي للعربية الفصحى - **بيانات التدريب**: مجموعة بيانات شاملة من اللهجة السورية، بما في ذلك اللغة المحكية، الأدب، الأرشيفات التاريخية، والمحتوى الذي أنشأه المستخدمون - **الأجهزة والتدريب**: تدريب مكثف باستخدام 4 وحدات معالجة رسومات A100 لمدة 3 أيام مع مجموعات بيانات متخصصة ## الميزات الرئيسية - **إتقان اللهجة السورية**: مصمم لالتقاط التفاصيل الدقيقة للهجة السورية، بما في ذلك الاختلافات الإقليمية والتعبيرات الاصطلاحية. - **الحساسية الثقافية**: مدرب على بيانات تعكس التراث والعادات والتاريخ السوري لتقديم استجابات واعية ثقافيًا. - **تطبيقات واسعة**: يبرع في الذكاء الاصطناعي للمحادثة، إنشاء المحتوى، التحليل التاريخي، والمهام التعليمية. </div> # English_Discription **Shahin-v0.1** is a large language model (LLM) created specifically for the Syrian Arabic dialect, designed as a tribute to the resilience and spirit of the Syrian people. This model is a beacon of freedom and progress, developed in honor of their victory against 70 years of dictatorship. It offers unparalleled fluency in Syrian Arabic and excels in a wide range of tasks, from dialogue generation to cultural insights, history, and more. ## Model Details - **Model Base**: Custom architecture with 14 billion parameters, optimized for Syrian Arabic - **Languages Supported**: Primarily Syrian Arabic, with basic support for Modern Standard Arabic (MSA) - **Training Data**: A comprehensive corpus of Syrian Arabic, including spoken language, literature, historical archives, and user-generated content - **Hardware & Training**: Fine-tuned on 4 NVIDIA A100 GPUs over 3 days with specialized datasets ## Key Features - **Syrian Dialect Mastery**: Designed to capture the nuances of Syrian Arabic, including regional variations and idiomatic expressions. - **Cultural Sensitivity**: Trained on data reflecting Syrian heritage, customs, and history to generate culturally aware responses. - **Wide Application**: Excels in conversational AI, content creation, historical analysis, and educational tasks. ## Example Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model_name = "malhajar/Shahin-v0.1" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "شو الأسباب يلي خلت الثورة السورية تستمر رغم كل الظروف؟" messages = [ { "role": "system", "content": "أنت مساعد افتراضي سوري، جاهز للإجابة على الأسئلة بكل وضوح وصدق." 
}, { "role": "user", "content": prompt } ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, temperature=0.7, top_k=40, top_p=0.85, max_new_tokens=256, repetition_penalty=1.1, ) response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Example Outputs ### Example 1 **Prompt:** ``` شو الأسباب يلي خلت الثورة السورية تستمر رغم كل الظروف؟ ``` **Response:** الثورة السورية استمرت بفضل إصرار الشعب السوري على الحرية والكرامة. الشعب ما تخلى عن مطالبه رغم القمع العنيف والظروف الصعبة، وكان في دعم كبير من المغتربين السوريين . الإرادة الحقيقية لتغيير الواقع المظلم كانت أقوى من أي تحدي. ### Example 2 **Prompt:** ``` احكيلي عن تاريخ قلعة حلب. ``` **Response:** قلعة حلب هي واحدة من أهم المعالم التاريخية بسوريا، عمرها يرجع لأكثر من 4,000 سنة. كانت مركزاً للدفاع عن المدينة ومرت عليها حضارات عديدة مثل الحثيين، الرومان، والأيوبيين. اليوم، هي رمز للصمود السوري ومرآة لتاريخ الشعب العريق. --- Shahin-v0.1 showcases the beauty and strength of Syrian Arabic and stands as a technological milestone for the Syrian people. ---
OhaymakingO/10-hey-58-28011049
OhaymakingO
"2025-01-28T07:51:55Z"
32
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "text-to-speech", "annotation", "en", "dataset:ylacombe/jenny-tts-tagged-v1", "dataset:reach-vb/jenny_tts_dataset", "arxiv:2402.01912", "autotrain_compatible", "region:us" ]
text-to-speech
"2025-01-28T07:49:50Z"
--- library_name: transformers tags: - text-to-speech - annotation language: - en pipeline_tag: text-to-speech inference: false datasets: - ylacombe/jenny-tts-tagged-v1 - reach-vb/jenny_tts_dataset --- <img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Parler-TTS Mini v1 - Jenny <a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts_mini"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> * **Fine-tuning guide on Colab:** <a target="_blank" href="https://github.com/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Fine-tuned version of **Parler-TTS Mini v1** on the [30-hours single-speaker high-quality Jenny (she's Irish ☘️) dataset](https://github.com/dioco-group/jenny-tts-dataset), suitable for training a TTS model. Usage is more or less the same as Parler-TTS v1, just specify they keyword “Jenny” in the voice description: ## Usage ```sh pip install git+https://github.com/huggingface/parler-tts.git ``` You can then use the model with the following inference snippet: ```py import torch from parler_tts import ParlerTTSForConditionalGeneration from transformers import AutoTokenizer import soundfile as sf device = "cuda:0" if torch.cuda.is_available() else "cpu" model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-mini-v1-jenny").to(device) tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-mini-v1-jenny") prompt = "Hey, how are you doing today? My name is Jenny, and I'm here to help you with any questions you have." description = "Jenny speaks at an average pace with an animated delivery in a very confined sounding environment with clear audio quality." input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device) prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids) audio_arr = generation.cpu().numpy().squeeze() sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate) ``` ## Citation If you found this repository useful, please consider citing this work and also the original Stability AI paper: ``` @misc{lacombe-etal-2024-parler-tts, author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi}, title = {Parler-TTS}, year = {2024}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/parler-tts}} } ``` ``` @misc{lyth2024natural, title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations}, author={Dan Lyth and Simon King}, year={2024}, eprint={2402.01912}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` ## License License - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as "Jenny", and where at all practical, "Jenny (Dioco)". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply.
Anujgr8/w2v-bert-Marathi-large
Anujgr8
"2024-06-08T12:03:08Z"
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-08T09:16:16Z"
--- license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: w2v-bert-Marathi-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-Marathi-large This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2714 - Wer: 0.1698 - Cer: 0.0531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 2.8852 | 0.5882 | 300 | 0.7826 | 0.4911 | 0.1647 | | 0.6243 | 1.1765 | 600 | 0.6280 | 0.3920 | 0.1351 | | 0.4901 | 1.7647 | 900 | 0.4369 | 0.3101 | 0.0986 | | 0.355 | 2.3529 | 1200 | 0.3922 | 0.2658 | 0.0849 | | 0.2943 | 2.9412 | 1500 | 0.3400 | 0.2371 | 0.0753 | | 0.2177 | 3.5294 | 1800 | 0.3041 | 0.2080 | 0.0646 | | 0.1779 | 4.1176 | 2100 | 0.2906 | 0.1954 | 0.0608 | | 0.1299 | 4.7059 | 2400 | 0.2904 | 0.1779 | 0.0560 | | 0.0929 | 5.2941 | 2700 | 0.2885 | 0.1727 | 0.0537 | | 0.0729 | 5.8824 | 3000 | 0.2714 | 0.1698 | 0.0531 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
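## Usage sketch

A minimal, untested inference sketch with the 🤗 Transformers ASR pipeline (assumptions: the repo ships a processor/tokenizer alongside the CTC weights, and `sample_marathi.wav` is a placeholder for your own audio file; 16 kHz mono works best).

```python
import torch
from transformers import pipeline

# Load the fine-tuned Wav2Vec2-BERT CTC checkpoint for Marathi transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="Anujgr8/w2v-bert-Marathi-large",
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file; the pipeline handles feature extraction and CTC decoding.
print(asr("sample_marathi.wav")["text"])
```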
mnlp-2024/dpo
mnlp-2024
"2024-06-04T23:46:38Z"
106
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-01T12:24:36Z"
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gingun18/my_awesome_opus_books_model
gingun18
"2024-02-23T07:44:41Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-02-23T06:58:20Z"
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6068 - Bleu: 5.6691 - Gen Len: 17.5871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8553 | 1.0 | 6355 | 1.6304 | 5.5031 | 17.602 | | 1.8202 | 2.0 | 12710 | 1.6068 | 5.6691 | 17.5871 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
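## Usage sketch

A minimal, untested inference sketch. The card does not state the language pair; the `translate English to French:` prefix below is an assumption based on the usual opus_books fine-tuning recipe, so swap it for whatever pair this checkpoint was actually trained on.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gingun18/my_awesome_opus_books_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints expect a task prefix; adjust it to the model's actual language pair.
text = "translate English to French: The cat sat quietly by the window."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```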
seglinglin/Historical-Illustration-Extraction
seglinglin
"2025-02-25T15:31:55Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-09-20T12:15:10Z"
--- license: mit --- # Model for visual element extraction in historical documents
shibajustfor/afe63fbb-7bf0-4a9d-8276-f011dd6c479f
shibajustfor
"2025-02-17T00:12:08Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-02-16T23:50:20Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: afe63fbb-7bf0-4a9d-8276-f011dd6c479f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afe63fbb-7bf0-4a9d-8276-f011dd6c479f This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1040 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
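## Usage sketch

A minimal, untested sketch for loading this LoRA adapter on top of its Qwen2-1.5B-Instruct base (assumptions: the adapter weights sit at the repo root and are compatible with `AutoPeftModelForCausalLM`).

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "shibajustfor/afe63fbb-7bf0-4a9d-8276-f011dd6c479f"

# Loads the base model declared in the adapter config and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

messages = [{"role": "user", "content": "Explain what a LoRA adapter is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```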
lmqg/mt5-base-dequad-ae-trimmed-50000
lmqg
"2023-11-08T05:13:15Z"
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-11-08T04:52:53Z"
# Vocabulary Trimmed [lmqg/mt5-base-dequad-ae](https://huggingface.co/lmqg/mt5-base-dequad-ae): `lmqg/mt5-base-dequad-ae-trimmed-50000` This model is a trimmed version of [lmqg/mt5-base-dequad-ae](https://huggingface.co/lmqg/mt5-base-dequad-ae) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size. The following table shows a summary of the trimming process. | | lmqg/mt5-base-dequad-ae | lmqg/mt5-base-dequad-ae-trimmed-50000 | |:---------------------------|:--------------------------|:----------------------------------------| | parameter_size_full | 582,384,384 | 275,032,320 | | parameter_size_embedding | 384,155,136 | 76,803,072 | | vocab_size | 250,101 | 50,002 | | compression_rate_full | 100.0 | 47.23 | | compression_rate_embedding | 100.0 | 19.99 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | de | vocabtrimmer/mc4_validation | text | de | validation | 50000 | 2 |
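A minimal, untested sketch for loading the trimmed checkpoint and sanity-checking the numbers in the table above; it loads like any other seq2seq model, only with a smaller vocabulary and embedding matrix.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmqg/mt5-base-dequad-ae-trimmed-50000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Should line up with the trimmed column above: roughly 275M parameters, 50,002-entry vocab.
print(f"parameters: {model.num_parameters():,}")
print(f"vocab size: {model.config.vocab_size:,}")
```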
joshswartz/model_llama_mle
joshswartz
"2023-11-07T20:45:39Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
"2023-11-07T19:10:03Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0
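## Usage sketch

A minimal, untested loading sketch that mirrors the quantization config listed above (assumptions: the adapter weights are stored at the repo root, and you have access to the gated `meta-llama/Llama-2-7b-hf` base model).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Recreate the 4-bit NF4 double-quant setup from the training config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the LoRA adapter from this repo on top of the quantized base.
model = PeftModel.from_pretrained(base, "joshswartz/model_llama_mle")
```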
michaelpstanich/SpiritMix-Rival
michaelpstanich
"2023-12-22T05:37:41Z"
0
0
null
[ "anime", "2D", "en", "license:creativeml-openrail-m", "region:us" ]
null
"2023-10-25T09:58:35Z"
--- license: creativeml-openrail-m language: - en tags: - anime - 2D --- The world of SpiritMix has opened once again! For this entry we have a bit of a weird one, SpiritMix - Rival! Rival is a spin-off of SpiritMix - Deep Line which starts from the same base but alters the weights and adds a few checkpoints along the way, creating a mix with similar concepts but the opposite goals and visual style. Unlike Deep Line, this merge focuses on supporting complex prompts, complex upscaling, and giving the more 'general' detailed anime vibe the AI generation scene has grown used to. Rival unfortunately suffers from some common issues I'm not a fan of, so I do plan to revisit this style of checkpoint in the future (most likely by updating this one), but even still, this checkpoint adds some uniqueness to my currently released Mix list. Make sure to share your results! Note 1 : I'm releasing this checkpoint as-is to see what people think of it; however, I'm personally still not happy with how it turned out, so I will return to release either an update or a newer version when I get the mix done right. Note 2 : Since this model relies heavily on upscaling to produce good results, it may be more difficult to use than other checkpoints. < ----- ----- ----- > Discord - The Broken Chatbox - https://discord.gg/h3vB7S4FEw Donations! =^-^= - https://ko-fi.com/michaelpstanich < ----- ----- ----- > Model Name : SpiritMix - Rival (2D Anime) [hash:b77916be18] Experimental Name : _SpiritMerge4_0_5_0_4_0_3.safetensors [hash:c0a58902f3] (Some of my gens may have the experimental name/hash in the data; it's the same model, just without the baked VAE) Base Model : SD1.5 (Compatible with SD1.5 models and tools, may work with some SD2.x models) Focus : Create an opposite to SpiritMix - Deep Line, focused on complex upscaling potential and lots of macro detail. SFW / NSFW : This model does well with SFW content but can lean lewd with certain prompts and contexts; it handles nudity well, and NSFW works with the right prompts but may be inconsistent. Not considered safe to use in public! Bias : Tends to make characters a bit tall and often favors larger body sizes. NSFW tends to over-exaggerate quite a bit. Many other 'typical anime' biases in terms of styling and body form/proportions/anatomy. Defaults to lighter skin tones and doesn't always obey skin-color-related prompts. < ----- ----- ----- > Quick NSFW Warning : This model will often slide into more lewd territory with certain keywords. Be careful when designing your prompts: include nudity and nsfw in the negatives if you're strictly looking for SFW content, and include the appropriate body prompts when making NSFW content. < ----- ----- ----- > So what is SpiritMix - Rival (2D Anime)? Unlike my other mixes, which start with a concept and have their own unique recipe, the current release of Rival is an abandoned branch of SpiritMix - Deep Line. The idea was to take the same recipe, modify the weights, and change a few checkpoints in the mix to create the 'opposite' model. While mixing, I decided to design the model to be used with complex upscaling techniques, with a lot of potential for macro detail. This was accomplished by adding painterly and detailed models while removing or lowering the weight on more stylized checkpoints. I plan to return to this mix with a completely unique recipe in the future to accomplish these goals without the compromises seen in this initial release. < ----- ----- ----- > !!! IMPORTANT !!! 
Prompting Assistance and Usage Information : Recommended Settings : Sampler : DPM++ 2M SDE Exponential or DPM++ 2M SDE Karras recommended Size : 512x512, 512x768, 768x512, 768x768 (additional upscale strongly recommended) Steps : Fairly high, 40+40 for basic gens, at least 50+50 for complex upscaling Hires fix / Latent upscale : (Safe) R-ESRGAN 4x+ with 0.34-0.36 denoise (for complex upscales, read the upscaling section) CFG Scale : 7-12, typically stay around 8.5-9 for most gens Clip-Skip : 2 recommended but not required Prompts : You can use this model however you choose; however, I specifically tried to mix with latent upscaling in mind. This does mean the results are somewhat inconsistent, but with the right prompt and some good post work the results can be quite pleasing. Prompting should include quality tags; the set I used was "masterpiece, high quality, highly detailed, detailed background", though you can remove masterpiece to reduce the painterly look of the model. "high quality" and "best quality" tended to have the same effect, but I didn't test enough to see if "best quality" is actually better or not. When prompting body sizes you may need to emphasize certain keywords to get the intended effect, especially if you're targeting smaller characters (the model prefers tall, thick women in most cases). When creating smaller characters, emphasize "girl" and don't include "female", as "female" weights toward a larger size. Upscaling : This is where things get tricky and require some fiddling, but I'll try to set some guidelines. For "safe" upscaling that reproduces the simpler style of the base model, use R-ESRGAN 4x+ with 0.34-0.36 denoise. This will keep your gen looking very similar to the original while cleaning up artifacts and adding details to unresolved areas. However, if you want a complex and detailed image, you'll want to play with either latent upscalers or other complex custom upscaler models. In Automatic1111 I used hires fix with Latent at 0.55-0.62 denoise or Latent (nearest) at 0.62 denoise for the best results. This upscale is very heavy on RNG, especially for hands, and you may need to switch between them, alter your denoise, and add/remove a single step from the upscale to try to improve it. Trial and error is the name of the game here, but you can also employ post-processing methods like inpainting. (ControlNet helps to a certain degree: you can make the base gen, feed it into ControlNet, and run the gen again to improve your results, though depending on the artifacts this may not always help.) Colors / VAE : When using certain style tags the baked-in VAE may oversaturate or overexpose the image; to fix this you can force another VAE with softer colors or use a LoRA to reduce the brightness of the gen itself. A rough diffusers translation of these settings is sketched below. 
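For people generating outside Automatic1111, here is a rough, hedged translation of the recommended settings into diffusers. Assumptions: the Rival safetensors file has been downloaded locally (the filename below is a placeholder), and DPM++ 2M SDE Karras is approximated with DPMSolverMultistepScheduler using the "sde-dpmsolver++" algorithm with Karras sigmas.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load the local SD1.5 checkpoint file (placeholder path) with its baked-in VAE.
pipe = StableDiffusionPipeline.from_single_file(
    "SpiritMix-Rival.safetensors", torch_dtype=torch.float16
).to("cuda")

# Approximate the recommended "DPM++ 2M SDE Karras" sampler.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "masterpiece, high quality, highly detailed, detailed background, 1girl, retro anime",
    negative_prompt="lowres, bad hands, bad anatomy, nsfw, nudity",
    width=512,
    height=768,
    num_inference_steps=40,
    guidance_scale=8.5,
).images[0]
image.save("spiritmix_rival_sample.png")
```

Note that this only covers the base gen; the hires fix / latent upscale pass described above would still need to be reproduced separately (for example with an img2img pass over the result).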
What this mix does well : Creates a lot of detail and lets latent upscalers run rampant with details. Maintains an anime aesthetic through any concept thrown at it. Allows a wide range of 'moods' and 'tones' through color grading and variance. Supports stylized keywords like "retro" for retro anime styles and coloring. Most common LoRAs work really well. Responds really well to control-style tools like ControlNet or ADetailer. What this mix struggles with (aside from typical AI issues) : Default gens without latent upscale may look a bit melty or unrefined at times. Tends to generate tall and large female characters; in some cases proportions will disconnect from the surrounding environment. Struggles with distant faces; use tools or latent upscaling to help. Hands: they're normally an issue, but given the focus on upscaling, hands struggle even more, as latent upscaling adds more chances for them to get screwed up (may require post-gen work like in-painting). Requires additional prompting in some cases to get specific looks; use negatives to help guide clothing/environments/concepts. Other quirks or issues : Doesn't like limited color palettes and will try to inject color whenever possible. NSFW prompts may be difficult to get right and require experimentation with different keywords+weights. < ----- ----- ----- > Hey there, so like, this stuff takes a lot of work and I have many other projects, so if you'd like to support my work then why not consider a donation? You can send me donations over on Ko-Fi! https://ko-fi.com/michaelpstanich If you have any suggestions, feedback, or ideas you want to throw my way, head over to our community Discord, The Broken Chatbox! https://discord.gg/h3vB7S4FEw
sacharfd/best_mcqa_model
sacharfd
"2024-06-13T16:44:48Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-13T16:44:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Adnankantarevic12345/Kantar
Adnankantarevic12345
"2023-06-21T23:16:30Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-06-21T23:16:30Z"
--- license: creativeml-openrail-m ---
MinaMila/GermanCredit_Ext_Mitig_CFA_Phi_20ep
MinaMila
"2025-01-15T18:43:57Z"
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:finetune:unsloth/Phi-3.5-mini-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-15T18:41:32Z"
--- base_model: unsloth/Phi-3.5-mini-instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** MinaMila - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3.5-mini-instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lesso09/372cdd97-7905-471e-99c7-d3735d904941
lesso09
"2025-01-12T07:08:55Z"
10
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-12T06:30:36Z"
--- library_name: peft base_model: lmsys/vicuna-7b-v1.3 tags: - axolotl - generated_from_trainer model-index: - name: 372cdd97-7905-471e-99c7-d3735d904941 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-7b-v1.3 bf16: true chat_template: llama3 datasets: - data_files: - a577407676f67a03_train_data.json ds_type: json format: custom path: /workspace/input_data/a577407676f67a03_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso09/372cdd97-7905-471e-99c7-d3735d904941 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/a577407676f67a03_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e6323989-d29d-475a-b251-2cef416d5eb8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e6323989-d29d-475a-b251-2cef416d5eb8 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 372cdd97-7905-471e-99c7-d3735d904941 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2691 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4521 | 0.0002 | 1 | 1.4518 | | 1.4959 | 0.0008 | 5 | 1.4441 | | 1.438 | 0.0016 | 10 | 1.3902 | | 1.3862 | 0.0024 | 15 | 1.3070 | | 1.1531 | 0.0033 | 20 | 1.2818 | | 1.0691 | 0.0041 | 25 | 1.2691 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Smuggling1710/Gemma2-9b-toxic-guff
Smuggling1710
"2024-07-19T20:54:22Z"
1,351
0
transformers
[ "transformers", "gguf", "gemma2", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2-9b-bnb-4bit", "base_model:quantized:unsloth/gemma-2-9b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-19T20:48:36Z"
--- base_model: unsloth/gemma-2-9b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma2 - gguf --- # Uploaded model - **Developed by:** Smuggling1710 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AndersNielsen/distilbert-base-uncased-finetuned-emotion
AndersNielsen
"2024-10-24T23:33:06Z"
126
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-21T18:18:32Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1618 - Accuracy: 0.937 - F1: 0.9370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7529 | 1.0 | 250 | 0.2673 | 0.918 | 0.9187 | | 0.1932 | 2.0 | 500 | 0.1696 | 0.9325 | 0.9322 | | 0.1291 | 3.0 | 750 | 0.1491 | 0.937 | 0.9375 | | 0.0996 | 4.0 | 1000 | 0.1465 | 0.937 | 0.9367 | | 0.0806 | 5.0 | 1250 | 0.1475 | 0.9385 | 0.9382 | | 0.0698 | 6.0 | 1500 | 0.1567 | 0.936 | 0.9360 | | 0.0595 | 7.0 | 1750 | 0.1611 | 0.934 | 0.9338 | | 0.0519 | 8.0 | 2000 | 0.1618 | 0.937 | 0.9370 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
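## Usage sketch

A minimal, untested inference sketch. The label set isn't listed above; the usual six-class `emotion` dataset labels (sadness, joy, love, anger, fear, surprise) are assumed.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AndersNielsen/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label, not just the best one
)
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```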
CHL-LIVE/Baie.Comeau.vs.Rimouski.Live.CHL.Free
CHL-LIVE
"2025-02-22T21:23:26Z"
0
0
null
[ "region:us" ]
null
"2025-02-22T21:18:48Z"
<a href="https://expresssportshub.com/chl/live"><img class="alignnone wp-image-131" src="https://expresssportshub.com/wp-content/uploads/2023/10/aa-yt-watch-live.png" alt="" width="259" height="75" /></a> On February 22, 2025, the Baie-Comeau Drakkar faced off against the Rimouski Océanic in a thrilling Quebec Major Junior Hockey League (QMJHL) matchup at the Rimouski Colisée. The game was a testament to the fierce rivalry between these two teams, with both sides displaying exceptional skill and determination. The first period saw the Drakkar taking an early lead, capitalizing on a power-play opportunity to make it 1-0. The Océanic responded swiftly, equalizing the score with a well-executed breakaway goal. The period ended in a 1-1 tie, setting the stage for an intense battle. In the second period, Rimouski gained momentum, scoring two consecutive goals to establish a 3-1 advantage. However, Baie-Comeau's resilience shone through as they narrowed the gap to 3-2 with a strategic offensive play before the period concluded. The final period was a showcase of defensive prowess from both teams. Despite multiple scoring opportunities, goaltenders on each side made crucial saves to keep the game tight. In the closing minutes, the Drakkar pulled their goalie in a desperate attempt to equalize, but the Océanic secured their victory with an empty-net goal, finalizing the score at 4-2. This win was significant for Rimouski, enhancing their standings in the QMJHL and intensifying the rivalry with Baie-Comeau. Both teams demonstrated exceptional talent and sportsmanship, promising more exciting encounters in the future.
John6666/duel-pony-ani-mix-25d-v10-sdxl
John6666
"2024-12-23T06:31:14Z"
71
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "game", "realistic", "semi-realism", "2.5D", "styles", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-09-24T03:29:46Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - game - realistic - semi-realism - 2.5D - styles - pony --- Original model is [here](https://civitai.com/models/789928/duelponyanimix25d?modelVersionId=883352). This model created by [noesca](https://civitai.com/user/noesca).
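A minimal, untested diffusers sketch (assumptions: the repo is stored in diffusers format, as the `StableDiffusionXLPipeline` tag suggests, and the Pony-style `score_9` quality tags apply to this merge).

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/duel-pony-ani-mix-25d-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "score_9, score_8_up, 1girl, 2.5d, semi-realistic, detailed background",
    negative_prompt="lowres, bad anatomy, watermark",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("duel_pony_sample.png")
```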
RoversX/StableBeluga-7B-Qlora-Samantha-Zh-V1
RoversX
"2023-08-07T08:32:54Z"
8
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:ehartford/samantha-data", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-08-03T13:38:43Z"
--- datasets: - ehartford/samantha-data language: - zh - en pipeline_tag: text-generation --- # StableBeluga-7B-Qlora-Samantha-Zh-V1 StableBeluga-7B-Qlora-Samantha-Zh-V1 is a conversational AI assistant based on [stabilityai/StableBeluga-7B](https://huggingface.co/stabilityai/StableBeluga-7B) and trained on the samantha-1.1-zh dataset from [ehartford/samantha-data](https://huggingface.co/datasets/ehartford/samantha-data). ## Model Details ![Train](https://ucarecdn.com/11ea8fe5-322c-41a7-a7cf-36de196f3421/) Stable Beluga 7B should be used with this prompt format: ``` ### System: This is a system prompt, please behave and help the user. ### User: Your prompt here ### Assistant: The output of Stable Beluga 7B ```
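## Usage sketch

A minimal, untested generation sketch using the prompt format above; the system message and the Chinese user turn below are placeholders, not from the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RoversX/StableBeluga-7B-Qlora-Samantha-Zh-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Follow the "### System / ### User / ### Assistant" format shown in the card.
prompt = (
    "### System:\nYou are Samantha, a friendly and empathetic assistant.\n\n"
    "### User:\n你今天过得怎么样?\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```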
LarryAIDraw/corneo_marin_kitagawa
LarryAIDraw
"2023-01-25T15:38:34Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-01-25T15:38:00Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/5251/corneos-marin-kitagawa-ti-embedding
shainahub/covid_qa_distillbert
shainahub
"2021-12-15T19:10:48Z"
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - covid_qa_deepset metrics: - squad_v2 # Example: wer. Use metric id from https://hf.co/metrics widget: - text: "What is COVID-19?" context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic." - text: "Where was COVID-19 first discovered?" context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event." - text: "What is Post-COVID syndrome?" context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction." --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset. It achieves the following results on the evaluation set: - Loss: 0.0976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.2502 | 1.0 | 3880 | 0.1824 | | 0.2007 | 2.0 | 7760 | 0.1250 | | 0.1338 | 3.0 | 11640 | 0.0976 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
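## Usage sketch

A minimal, untested extractive-QA sketch reusing the first widget example defined above.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="shainahub/covid_qa_distillbert")

result = qa(
    question="What is COVID-19?",
    context=(
        "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe "
        "acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was "
        "identified in Wuhan, China, in December 2019."
    ),
)
print(result["answer"], round(result["score"], 3))
```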
mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF
mradermacher
"2025-01-27T14:54:50Z"
623
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:Casual-Autopsy/Llama-3-VNTL-Yollow-8B", "base_model:quantized:Casual-Autopsy/Llama-3-VNTL-Yollow-8B", "endpoints_compatible", "region:us", "imatrix" ]
null
"2025-01-27T12:09:55Z"
--- base_model: Casual-Autopsy/Llama-3-VNTL-Yollow-8B language: - en - ja library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Casual-Autopsy/Llama-3-VNTL-Yollow-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF/resolve/main/Llama-3-VNTL-Yollow-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
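## Usage sketch (llama-cpp-python)

A minimal, untested local-inference sketch (assumptions: a recent `llama-cpp-python` build with `huggingface_hub` installed for Hub downloads; the Q4_K_M file is the "fast, recommended" quant from the table above).

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3-VNTL-Yollow-8B-i1-GGUF",
    filename="Llama-3-VNTL-Yollow-8B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Translate into English: 今日はいい天気ですね。", max_tokens=64)
print(out["choices"][0]["text"])
```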
pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF
pradma
"2025-01-30T09:46:41Z"
126
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated", "base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-01-30T09:46:06Z"
--- license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo library_name: transformers --- # pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pradma/Qwen2.5-7B-Instruct-1M-abliterated-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q8_0.gguf -c 2048 ```
sartifyllc/MultiLinguSwahili-serengeti-E250-nli-matryoshka
sartifyllc
"2024-07-07T11:32:26Z"
12
2
sentence-transformers
[ "sentence-transformers", "safetensors", "electra", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1115700", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:UBC-NLP/serengeti-E250", "base_model:finetune:UBC-NLP/serengeti-E250", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-07-07T11:22:22Z"
--- language: [] library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1115700 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: UBC-NLP/serengeti-E250 datasets: [] metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: Ndege mwenye mdomo mrefu katikati ya ndege. sentences: - Panya anayekimbia juu ya gurudumu. - Mtu anashindana katika mashindano ya mbio. - Ndege anayeruka. - source_sentence: Msichana mchanga mwenye nywele nyeusi anakabili kamera na kushikilia mfuko wa karatasi wakati amevaa shati la machungwa na mabawa ya kipepeo yenye rangi nyingi. sentences: - Mwanamke mzee anakataa kupigwa picha. - mtu akila na mvulana mdogo kwenye kijia cha jiji - Msichana mchanga anakabili kamera. - source_sentence: Wanawake na watoto wameketi nje katika kivuli wakati kikundi cha watoto wadogo wameketi ndani katika kivuli. sentences: - Mwanamke na watoto na kukaa chini. - Mwanamke huyo anakimbia. - Watu wanasafiri kwa baiskeli. - source_sentence: Mtoto mdogo anaruka mikononi mwa mwanamke aliyevalia suti nyeusi ya kuogelea akiwa kwenye dimbwi. sentences: - Mtoto akiruka mikononi mwa mwanamke aliyevalia suti ya kuogelea kwenye dimbwi. - Someone is holding oranges and walking - Mama na binti wakinunua viatu. - source_sentence: Mwanamume na mwanamke wachanga waliovaa mikoba wanaweka au kuondoa kitu kutoka kwenye mti mweupe wa zamani, huku watu wengine wamesimama au wameketi nyuma. sentences: - tai huruka - mwanamume na mwanamke wenye mikoba - Wanaume wawili wameketi karibu na mwanamke. pipeline_tag: sentence-similarity model-index: - name: SentenceTransformer based on UBC-NLP/serengeti-E250 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 768 type: sts-test-768 metrics: - type: pearson_cosine value: 0.7084016023985643 name: Pearson Cosine - type: spearman_cosine value: 0.7080643276583263 name: Spearman Cosine - type: pearson_manhattan value: 0.7163851544290831 name: Pearson Manhattan - type: spearman_manhattan value: 0.7066259909380899 name: Spearman Manhattan - type: pearson_euclidean value: 0.716171309296757 name: Pearson Euclidean - type: spearman_euclidean value: 0.7064427148038006 name: Spearman Euclidean - type: pearson_dot value: 0.38463559218643695 name: Pearson Dot - type: spearman_dot value: 0.3566836293112297 name: Spearman Dot - type: pearson_max value: 0.7163851544290831 name: Pearson Max - type: spearman_max value: 0.7080643276583263 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 512 type: sts-test-512 metrics: - type: pearson_cosine value: 0.7059523092716506 name: Pearson Cosine - type: spearman_cosine value: 0.7046582726338858 name: Spearman Cosine - type: pearson_manhattan value: 0.714245009590492 name: Pearson Manhattan - type: spearman_manhattan value: 0.7048777976859945 name: Spearman Manhattan - type: pearson_euclidean value: 0.7150194670982656 name: Pearson Euclidean - type: spearman_euclidean value: 0.7055458365374757 name: Spearman Euclidean - type: pearson_dot value: 0.3855295554891442 name: Pearson Dot - type: spearman_dot value: 0.3585966097040326 name: Spearman Dot - type: pearson_max value: 0.7150194670982656 name: Pearson Max - type: spearman_max value: 0.7055458365374757 name: Spearman Max - task: type: 
semantic-similarity name: Semantic Similarity dataset: name: sts test 256 type: sts-test-256 metrics: - type: pearson_cosine value: 0.7069259070512649 name: Pearson Cosine - type: spearman_cosine value: 0.7072103115498357 name: Spearman Cosine - type: pearson_manhattan value: 0.7151518946293685 name: Pearson Manhattan - type: spearman_manhattan value: 0.7050845216566457 name: Spearman Manhattan - type: pearson_euclidean value: 0.7154956682724514 name: Pearson Euclidean - type: spearman_euclidean value: 0.70486417475867 name: Spearman Euclidean - type: pearson_dot value: 0.37291132473389677 name: Pearson Dot - type: spearman_dot value: 0.3480769113927452 name: Spearman Dot - type: pearson_max value: 0.7154956682724514 name: Pearson Max - type: spearman_max value: 0.7072103115498357 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 128 type: sts-test-128 metrics: - type: pearson_cosine value: 0.7022542784280805 name: Pearson Cosine - type: spearman_cosine value: 0.7062378358777478 name: Spearman Cosine - type: pearson_manhattan value: 0.711575484251127 name: Pearson Manhattan - type: spearman_manhattan value: 0.701312903814612 name: Spearman Manhattan - type: pearson_euclidean value: 0.7125043324593673 name: Pearson Euclidean - type: spearman_euclidean value: 0.7011154675785318 name: Spearman Euclidean - type: pearson_dot value: 0.34394993785114003 name: Pearson Dot - type: spearman_dot value: 0.31686351995727197 name: Spearman Dot - type: pearson_max value: 0.7125043324593673 name: Pearson Max - type: spearman_max value: 0.7062378358777478 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 64 type: sts-test-64 metrics: - type: pearson_cosine value: 0.6950172826546709 name: Pearson Cosine - type: spearman_cosine value: 0.6993973161633343 name: Spearman Cosine - type: pearson_manhattan value: 0.7059726901866531 name: Pearson Manhattan - type: spearman_manhattan value: 0.6938542774412633 name: Spearman Manhattan - type: pearson_euclidean value: 0.7066346687971139 name: Pearson Euclidean - type: spearman_euclidean value: 0.6949014564343952 name: Spearman Euclidean - type: pearson_dot value: 0.30982738809482646 name: Pearson Dot - type: spearman_dot value: 0.2855406388879541 name: Spearman Dot - type: pearson_max value: 0.7066346687971139 name: Pearson Max - type: spearman_max value: 0.6993973161633343 name: Spearman Max --- # SentenceTransformer based on UBC-NLP/serengeti-E250 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [UBC-NLP/serengeti-E250](https://huggingface.co/UBC-NLP/serengeti-E250) on the Mollel/swahili-n_li-triplet-swh-eng dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [UBC-NLP/serengeti-E250](https://huggingface.co/UBC-NLP/serengeti-E250) <!-- at revision 41b5b8b6179c4af2859768cbf4f0f03e928d651d -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - Mollel/swahili-n_li-triplet-swh-eng
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sartifyllc/MultiLinguSwahili-serengeti-E250-nli-matryoshka")
# Run inference
sentences = [
    'Mwanamume na mwanamke wachanga waliovaa mikoba wanaweka au kuondoa kitu kutoka kwenye mti mweupe wa zamani, huku watu wengine wamesimama au wameketi nyuma.',
    'mwanamume na mwanamke wenye mikoba',
    'tai huruka',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!-- ### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-test-768` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7084 | | **spearman_cosine** | **0.7081** | | pearson_manhattan | 0.7164 | | spearman_manhattan | 0.7066 | | pearson_euclidean | 0.7162 | | spearman_euclidean | 0.7064 | | pearson_dot | 0.3846 | | spearman_dot | 0.3567 | | pearson_max | 0.7164 | | spearman_max | 0.7081 | #### Semantic Similarity * Dataset: `sts-test-512` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.706 | | **spearman_cosine** | **0.7047** | | pearson_manhattan | 0.7142 | | spearman_manhattan | 0.7049 | | pearson_euclidean | 0.715 | | spearman_euclidean | 0.7055 | | pearson_dot | 0.3855 | | spearman_dot | 0.3586 | | pearson_max | 0.715 | | spearman_max | 0.7055 | #### Semantic Similarity * Dataset: `sts-test-256` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7069 | | **spearman_cosine** | **0.7072** | | pearson_manhattan | 0.7152 | | spearman_manhattan | 0.7051 | | pearson_euclidean | 0.7155 | | spearman_euclidean | 0.7049 | | pearson_dot | 0.3729 | | spearman_dot | 0.3481 | | pearson_max | 0.7155 | | spearman_max | 0.7072 | #### Semantic Similarity * Dataset: `sts-test-128` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7023 | | **spearman_cosine** | **0.7062** | | pearson_manhattan | 0.7116 | | spearman_manhattan | 0.7013 | | pearson_euclidean | 0.7125 | | spearman_euclidean | 0.7011 | | pearson_dot | 0.3439 | | spearman_dot | 0.3169 | | pearson_max | 0.7125 | | spearman_max | 0.7062 | #### Semantic Similarity * Dataset: `sts-test-64` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.695 | | **spearman_cosine** | **0.6994** | | pearson_manhattan | 0.706 | | spearman_manhattan | 0.6939 | | pearson_euclidean | 0.7066 | | spearman_euclidean | 0.6949 | | pearson_dot | 0.3098 | | spearman_dot | 0.2855 | | pearson_max | 0.7066 | | spearman_max | 0.6994 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Mollel/swahili-n_li-triplet-swh-eng * Dataset: Mollel/swahili-n_li-triplet-swh-eng * Size: 1,115,700 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 11.27 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.0 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.56 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | anchor | positive | negative | |:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> | | <code>Mtu aliyepanda farasi anaruka juu ya ndege iliyovunjika.</code> | <code>Mtu yuko nje, juu ya farasi.</code> | <code>Mtu yuko kwenye mkahawa, akiagiza omelette.</code> | | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### Mollel/swahili-n_li-triplet-swh-eng * Dataset: Mollel/swahili-n_li-triplet-swh-eng * Size: 13,168 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 18.07 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.45 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.27 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:-------------------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> | | <code>Wanawake wawili wanakumbatiana huku wakishikilia vifurushi vya kwenda.</code> | <code>Wanawake wawili wanashikilia vifurushi.</code> | <code>Wanaume hao 
wanapigana nje ya duka la vyakula vitamu.</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - 
`push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine | |:------:|:-----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:| | 0.0057 | 100 | 26.7003 | - | - | - | - | - | | 0.0115 | 200 | 20.7097 | - | - | - | - | - | | 0.0172 | 300 | 17.2266 | - | - | - | - | - | | 0.0229 | 400 | 15.7511 | - | - | - | - | - | | 0.0287 | 500 | 14.5329 | - | - | - | - | - | | 0.0344 | 600 | 12.6534 | - | - | - | - | - | | 0.0402 | 700 | 10.6758 | - | - | - | - | - | | 0.0459 | 800 | 9.421 | - | - | - | - | - | | 0.0516 | 900 | 9.5664 | - | - | - | - | - | | 0.0574 | 1000 | 8.5166 | - | - | - | - | - | | 0.0631 | 1100 | 8.657 | - | - | - | - | - | | 0.0688 | 1200 | 8.5473 | - | - | - | - | - | | 0.0746 | 1300 | 8.3018 | - | - | - | - | - | | 0.0803 | 1400 | 8.4488 | - | - | - | - | - | | 0.0860 | 1500 | 7.1796 | - | - | - | - | - | | 0.0918 | 1600 | 6.6136 | - | - | - | - | - | | 0.0975 | 1700 | 6.2638 | - | - | - | - | - | | 0.1033 | 1800 | 6.6955 | - | - | - | - | - | | 0.1090 | 1900 | 7.3585 | - | - | - | - | - | | 0.1147 | 2000 | 6.9043 | - | - | - | - | - | | 0.1205 | 2100 | 6.677 | - | - | - | - | - | | 0.1262 | 2200 | 6.3914 | - | - | - | - | - | | 0.1319 | 2300 | 6.0045 | - | - | - | - | - | | 0.1377 | 2400 | 5.8048 | - | - | - | - | - | | 0.1434 | 2500 | 5.6898 | - | - | - | - | - | | 0.1491 | 2600 | 5.229 | - | - | - | - | - | | 0.1549 | 2700 | 5.2407 | - | - | - | - | - | | 0.1606 | 2800 | 5.7074 | - | - | - | - | - | | 0.1664 | 2900 | 6.2917 | - | - | - | - | - | | 0.1721 | 3000 | 6.5651 | - | - | - | - | - | | 0.1778 | 3100 | 6.7751 | - | - | - | - | - | | 0.1836 | 3200 | 6.195 | - | - | - | - | - | | 0.1893 | 3300 | 5.4697 | - | - | - | - | - | | 0.1950 | 3400 | 5.1362 | - | - | - | - | - | | 0.2008 | 3500 | 5.581 | - | - | - | - | - | | 0.2065 | 3600 | 5.4309 | - | - | - | - | - | | 0.2122 | 3700 | 5.6688 | - | - | - | - | - | | 0.2180 | 3800 | 5.6923 | - | - | - | - | - | | 0.2237 | 3900 | 5.8598 | - | - | - | - | - | | 0.2294 | 4000 | 5.3498 | - | - | - | - | - | | 0.2352 | 4100 | 5.3797 | - | - | - | - | - | | 0.2409 | 4200 | 5.0389 | - | - | - | - | - | | 0.2467 | 4300 | 5.6622 | - | - | - | - | - | | 0.2524 | 4400 | 5.6249 | - | - | - | - | - | | 0.2581 | 4500 | 5.6927 | - | - | - | - | - | | 0.2639 | 4600 | 5.3612 | - | - | - | - | - | | 0.2696 | 4700 | 5.2751 | - | - | - | - | - | | 0.2753 | 4800 | 5.4224 | - | - | - | - | - | | 0.2811 | 4900 | 5.0338 | - | - | - | - | - | | 0.2868 | 5000 | 4.9813 | - | - | - | - | - | | 0.2925 | 5100 | 4.8533 | - | - | - | - | - | | 0.2983 | 5200 | 5.4137 | - | - | - | - | - | | 0.3040 | 5300 | 5.4063 | - | - | - | - | - | | 0.3098 | 5400 | 5.3107 | - | - | - | - | - | | 0.3155 | 5500 | 5.0907 | - | - | - | - | - 
| | 0.3212 | 5600 | 4.8644 | - | - | - | - | - | | 0.3270 | 5700 | 4.7926 | - | - | - | - | - | | 0.3327 | 5800 | 5.0268 | - | - | - | - | - | | 0.3384 | 5900 | 5.3029 | - | - | - | - | - | | 0.3442 | 6000 | 5.1246 | - | - | - | - | - | | 0.3499 | 6100 | 5.1152 | - | - | - | - | - | | 0.3556 | 6200 | 5.4265 | - | - | - | - | - | | 0.3614 | 6300 | 4.7079 | - | - | - | - | - | | 0.3671 | 6400 | 4.6368 | - | - | - | - | - | | 0.3729 | 6500 | 4.662 | - | - | - | - | - | | 0.3786 | 6600 | 5.3695 | - | - | - | - | - | | 0.3843 | 6700 | 4.6974 | - | - | - | - | - | | 0.3901 | 6800 | 4.6584 | - | - | - | - | - | | 0.3958 | 6900 | 4.7413 | - | - | - | - | - | | 0.4015 | 7000 | 4.6604 | - | - | - | - | - | | 0.4073 | 7100 | 5.2476 | - | - | - | - | - | | 0.4130 | 7200 | 4.9966 | - | - | - | - | - | | 0.4187 | 7300 | 4.656 | - | - | - | - | - | | 0.4245 | 7400 | 4.5711 | - | - | - | - | - | | 0.4302 | 7500 | 5.0256 | - | - | - | - | - | | 0.4360 | 7600 | 4.3856 | - | - | - | - | - | | 0.4417 | 7700 | 4.2548 | - | - | - | - | - | | 0.4474 | 7800 | 4.8584 | - | - | - | - | - | | 0.4532 | 7900 | 4.8563 | - | - | - | - | - | | 0.4589 | 8000 | 4.5101 | - | - | - | - | - | | 0.4646 | 8100 | 4.4688 | - | - | - | - | - | | 0.4704 | 8200 | 4.7076 | - | - | - | - | - | | 0.4761 | 8300 | 4.3268 | - | - | - | - | - | | 0.4818 | 8400 | 4.6622 | - | - | - | - | - | | 0.4876 | 8500 | 4.4808 | - | - | - | - | - | | 0.4933 | 8600 | 4.676 | - | - | - | - | - | | 0.4991 | 8700 | 5.0348 | - | - | - | - | - | | 0.5048 | 8800 | 4.5497 | - | - | - | - | - | | 0.5105 | 8900 | 4.7428 | - | - | - | - | - | | 0.5163 | 9000 | 4.4418 | - | - | - | - | - | | 0.5220 | 9100 | 4.4946 | - | - | - | - | - | | 0.5277 | 9200 | 4.5249 | - | - | - | - | - | | 0.5335 | 9300 | 4.2413 | - | - | - | - | - | | 0.5392 | 9400 | 4.4799 | - | - | - | - | - | | 0.5449 | 9500 | 4.6807 | - | - | - | - | - | | 0.5507 | 9600 | 4.5901 | - | - | - | - | - | | 0.5564 | 9700 | 4.7266 | - | - | - | - | - | | 0.5622 | 9800 | 4.692 | - | - | - | - | - | | 0.5679 | 9900 | 4.8651 | - | - | - | - | - | | 0.5736 | 10000 | 4.7746 | - | - | - | - | - | | 0.5794 | 10100 | 4.68 | - | - | - | - | - | | 0.5851 | 10200 | 4.7697 | - | - | - | - | - | | 0.5908 | 10300 | 4.8848 | - | - | - | - | - | | 0.5966 | 10400 | 4.4004 | - | - | - | - | - | | 0.6023 | 10500 | 4.2979 | - | - | - | - | - | | 0.6080 | 10600 | 4.7266 | - | - | - | - | - | | 0.6138 | 10700 | 4.8605 | - | - | - | - | - | | 0.6195 | 10800 | 4.7436 | - | - | - | - | - | | 0.6253 | 10900 | 4.6239 | - | - | - | - | - | | 0.6310 | 11000 | 4.394 | - | - | - | - | - | | 0.6367 | 11100 | 4.8081 | - | - | - | - | - | | 0.6425 | 11200 | 4.2329 | - | - | - | - | - | | 0.6482 | 11300 | 4.873 | - | - | - | - | - | | 0.6539 | 11400 | 4.5557 | - | - | - | - | - | | 0.6597 | 11500 | 4.7918 | - | - | - | - | - | | 0.6654 | 11600 | 4.1607 | - | - | - | - | - | | 0.6711 | 11700 | 4.8744 | - | - | - | - | - | | 0.6769 | 11800 | 5.0072 | - | - | - | - | - | | 0.6826 | 11900 | 4.3532 | - | - | - | - | - | | 0.6883 | 12000 | 4.3319 | - | - | - | - | - | | 0.6941 | 12100 | 4.6885 | - | - | - | - | - | | 0.6998 | 12200 | 4.6682 | - | - | - | - | - | | 0.7056 | 12300 | 4.4258 | - | - | - | - | - | | 0.7113 | 12400 | 4.6136 | - | - | - | - | - | | 0.7170 | 12500 | 4.3594 | - | - | - | - | - | | 0.7228 | 12600 | 4.0627 | - | - | - | - | - | | 0.7285 | 12700 | 4.5244 | - | - | - | - | - | | 0.7342 | 12800 | 4.504 | - | - | - | - | - | | 0.7400 | 12900 | 4.4694 | - | - | - | - | - | | 0.7457 | 13000 | 4.4804 | - | - | - | - | - | | 
0.7514 | 13100 | 4.0588 | - | - | - | - | - | | 0.7572 | 13200 | 4.8016 | - | - | - | - | - | | 0.7629 | 13300 | 4.2971 | - | - | - | - | - | | 0.7687 | 13400 | 4.1326 | - | - | - | - | - | | 0.7744 | 13500 | 3.9763 | - | - | - | - | - | | 0.7801 | 13600 | 3.7716 | - | - | - | - | - | | 0.7859 | 13700 | 3.8448 | - | - | - | - | - | | 0.7916 | 13800 | 3.6779 | - | - | - | - | - | | 0.7973 | 13900 | 3.5938 | - | - | - | - | - | | 0.8031 | 14000 | 3.3981 | - | - | - | - | - | | 0.8088 | 14100 | 3.4151 | - | - | - | - | - | | 0.8145 | 14200 | 3.2498 | - | - | - | - | - | | 0.8203 | 14300 | 3.4909 | - | - | - | - | - | | 0.8260 | 14400 | 3.4098 | - | - | - | - | - | | 0.8318 | 14500 | 3.4448 | - | - | - | - | - | | 0.8375 | 14600 | 3.2868 | - | - | - | - | - | | 0.8432 | 14700 | 3.2196 | - | - | - | - | - | | 0.8490 | 14800 | 3.0852 | - | - | - | - | - | | 0.8547 | 14900 | 3.2341 | - | - | - | - | - | | 0.8604 | 15000 | 3.164 | - | - | - | - | - | | 0.8662 | 15100 | 3.0919 | - | - | - | - | - | | 0.8719 | 15200 | 3.176 | - | - | - | - | - | | 0.8776 | 15300 | 3.1361 | - | - | - | - | - | | 0.8834 | 15400 | 3.0683 | - | - | - | - | - | | 0.8891 | 15500 | 3.0275 | - | - | - | - | - | | 0.8949 | 15600 | 3.0763 | - | - | - | - | - | | 0.9006 | 15700 | 3.1828 | - | - | - | - | - | | 0.9063 | 15800 | 3.0053 | - | - | - | - | - | | 0.9121 | 15900 | 2.9696 | - | - | - | - | - | | 0.9178 | 16000 | 2.8919 | - | - | - | - | - | | 0.9235 | 16100 | 2.9922 | - | - | - | - | - | | 0.9293 | 16200 | 2.9063 | - | - | - | - | - | | 0.9350 | 16300 | 3.0633 | - | - | - | - | - | | 0.9407 | 16400 | 3.1782 | - | - | - | - | - | | 0.9465 | 16500 | 2.9206 | - | - | - | - | - | | 0.9522 | 16600 | 2.8785 | - | - | - | - | - | | 0.9580 | 16700 | 2.9934 | - | - | - | - | - | | 0.9637 | 16800 | 3.0125 | - | - | - | - | - | | 0.9694 | 16900 | 2.9338 | - | - | - | - | - | | 0.9752 | 17000 | 2.9931 | - | - | - | - | - | | 0.9809 | 17100 | 2.956 | - | - | - | - | - | | 0.9866 | 17200 | 2.8415 | - | - | - | - | - | | 0.9924 | 17300 | 3.0072 | - | - | - | - | - | | 0.9981 | 17400 | 2.9046 | - | - | - | - | - | | 1.0 | 17433 | - | 0.7062 | 0.7072 | 0.7047 | 0.6994 | 0.7081 | </details> ### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.0.1 - Transformers: 4.40.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.29.3 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, 
eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
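Because the model was trained with a Matryoshka objective over 768/512/256/128/64-dimensional prefixes (see the loss configuration and the per-dimension STS results above), embeddings can be truncated to trade a little accuracy for memory and speed. A minimal sketch, assuming the `truncate_dim` argument available in recent Sentence Transformers releases (the card reports training with 3.0.1); the example sentences are taken from the widget examples above.

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns only the first 256 dimensions.
# truncate_dim is assumed to be supported by the installed sentence-transformers version.
model = SentenceTransformer(
    "sartifyllc/MultiLinguSwahili-serengeti-E250-nli-matryoshka",
    truncate_dim=256,
)

sentences = [
    "Mtoto akiruka mikononi mwa mwanamke aliyevalia suti ya kuogelea kwenye dimbwi.",
    "Mama na binti wakinunua viatu.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 256)

# Cosine similarity on the truncated embeddings; expect a modest drop versus 768-d
# (compare the spearman_cosine values reported for sts-test-256 and sts-test-768 above).
print(model.similarity(embeddings, embeddings))
```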
prxy5605/d7753dae-9cee-48c7-bcf6-b06c8df12b8c
prxy5605
"2025-01-19T01:23:28Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-01-18T23:22:11Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d7753dae-9cee-48c7-bcf6-b06c8df12b8c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 2d958d8e90dcb1f1_train_data.json ds_type: json format: custom path: /workspace/input_data/2d958d8e90dcb1f1_train_data.json type: field_input: my_solu field_instruction: prompt field_output: solution format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5605/d7753dae-9cee-48c7-bcf6-b06c8df12b8c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/2d958d8e90dcb1f1_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ff4312a1-9301-42d6-b9d6-3e7995482f7a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ff4312a1-9301-42d6-b9d6-3e7995482f7a warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d7753dae-9cee-48c7-bcf6-b06c8df12b8c This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5687 | 0.0003 | 1 | 0.9317 | | 0.647 | 0.0143 | 50 | 0.6097 | | 0.5441 | 0.0286 | 100 | 0.5701 | | 0.5511 | 0.0429 | 150 | 0.5504 | | 0.497 | 0.0572 | 200 | 0.5460 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
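Since this repository contains only the LoRA adapter produced by the axolotl run above, inference requires attaching it to the base model. A minimal sketch with 🤗 Transformers and PEFT follows; the dtype, device placement, and prompt are illustrative assumptions rather than settings taken from the training config.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"                    # base model from the config above
adapter_id = "prxy5605/d7753dae-9cee-48c7-bcf6-b06c8df12b8c"  # this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # wrap the base model with the adapter weights

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```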
Aesopskenya/LanguageDetector
Aesopskenya
"2025-01-10T13:03:16Z"
35
0
null
[ "pytorch", "safetensors", "xlm-roberta", "license:apache-2.0", "region:us" ]
null
"2025-01-10T09:22:06Z"
--- license: apache-2.0 ---
LarryAIDraw/richelieu_kantaicollection
LarryAIDraw
"2024-01-26T08:57:20Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-01-26T07:27:39Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/269345/richelieu-kantai-collection
mkhan149/output_model101
mkhan149
"2025-02-20T20:04:46Z"
0
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-02-20T19:45:29Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: mkhan149/output_model101 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mkhan149/output_model101 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.3525 - Validation Loss: 4.8572 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -540, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.3525 | 4.8572 | 0 | ### Framework versions - Transformers 4.46.2 - TensorFlow 2.16.2 - Datasets 2.13.1 - Tokenizers 0.20.1
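Since the checkpoint is a DistilBERT masked-language model saved with TensorFlow weights, the quickest sanity check is the fill-mask pipeline. A small sketch follows; the example sentence is arbitrary, and the predictions will reflect whatever unnamed corpus the model was fine-tuned on.

```python
from transformers import pipeline

# framework="tf" loads the TensorFlow weights this card was trained with;
# omit it to let transformers pick a backend automatically.
unmasker = pipeline("fill-mask", model="mkhan149/output_model101", framework="tf")

for pred in unmasker("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```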
leoDas1999/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
leoDas1999
"2025-02-13T12:58:17Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-13T10:45:01Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** leoDas1999 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
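The repository stores the fine-tuned model in GGUF format, so it can be run with any llama.cpp-compatible runtime. The sketch below avoids hard-coding a quant filename (the card does not name one) by listing the repo and downloading the first `.gguf` file it finds; llama-cpp-python is only one possible runtime and is an assumption here, as is relying on the chat template embedded in the GGUF metadata.

```python
from huggingface_hub import hf_hub_download, list_repo_files
from llama_cpp import Llama

repo_id = "leoDas1999/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo"

# Pick whichever GGUF file is actually present in the repo rather than guessing a name.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
model_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])

# n_ctx and the prompt are illustrative; the chat template is read from GGUF metadata.
llm = Llama(model_path=model_path, n_ctx=2048)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```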
nttaii/20240711101322
nttaii
"2024-07-11T21:58:51Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "token-classification", "generated_from_trainer", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-07-11T03:13:46Z"
--- license: mit base_model: microsoft/mdeberta-v3-base tags: - generated_from_trainer model-index: - name: '20240711101322' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20240711101322 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3047 - Overall Strict Precision: 0.7507 - Overall Strict Recall: 0.7167 - Overall Strict F1: 0.7333 - Overall Ent Type Precision: 0.8382 - Overall Ent Type Recall: 0.8003 - Overall Ent Type F1: 0.8188 - Overall Partial Precision: 0.8076 - Overall Partial Recall: 0.7711 - Overall Partial F1: 0.7889 - Overall Exact Precision: 0.7608 - Overall Exact Recall: 0.7264 - Overall Exact F1: 0.7432 - Companyzipcode Strict Precision: 0.2192 - Companyzipcode Strict Recall: 0.9137 - Companyzipcode Strict F1: 0.3536 - Companyzipcode Ent Type Precision: 0.2254 - Companyzipcode Ent Type Recall: 0.9395 - Companyzipcode Ent Type F1: 0.3636 - Companyzipcode Partial Precision: 0.2248 - Companyzipcode Partial Recall: 0.9369 - Companyzipcode Partial F1: 0.3625 - Companyzipcode Exact Precision: 0.2209 - Companyzipcode Exact Recall: 0.9208 - Companyzipcode Exact F1: 0.3563 - Trafficdate Strict Precision: 0.0 - Trafficdate Strict Recall: 0.0 - Trafficdate Strict F1: 0 - Trafficdate Ent Type Precision: 0.0 - Trafficdate Ent Type Recall: 0.0 - Trafficdate Ent Type F1: 0 - Trafficdate Partial Precision: 0.0002 - Trafficdate Partial Recall: 0.4211 - Trafficdate Partial F1: 0.0003 - Trafficdate Exact Precision: 0.0001 - Trafficdate Exact Recall: 0.2632 - Trafficdate Exact F1: 0.0002 - 0 Strict Precision: 0.0 - 0 Strict Recall: 0 - 0 Strict F1: 0 - 0 Ent Type Precision: 0.0 - 0 Ent Type Recall: 0 - 0 Ent Type F1: 0 - 0 Partial Precision: 0.0 - 0 Partial Recall: 0 - 0 Partial F1: 0 - 0 Exact Precision: 0.0 - 0 Exact Recall: 0 - 0 Exact F1: 0 - Externalreservationnumber Strict Precision: 0.0033 - Externalreservationnumber Strict Recall: 0.2074 - Externalreservationnumber Strict F1: 0.0064 - Externalreservationnumber Ent Type Precision: 0.0036 - Externalreservationnumber Ent Type Recall: 0.2292 - Externalreservationnumber Ent Type F1: 0.0071 - Externalreservationnumber Partial Precision: 0.0062 - Externalreservationnumber Partial Recall: 0.3905 - Externalreservationnumber Partial F1: 0.0121 - Externalreservationnumber Exact Precision: 0.0057 - Externalreservationnumber Exact Recall: 0.3598 - Externalreservationnumber Exact F1: 0.0112 - Bookingnumber Strict Precision: 0.0124 - Bookingnumber Strict Recall: 0.3330 - Bookingnumber Strict F1: 0.0238 - Bookingnumber Ent Type Precision: 0.0142 - Bookingnumber Ent Type Recall: 0.3835 - Bookingnumber Ent Type F1: 0.0275 - Bookingnumber Partial Precision: 0.0172 - Bookingnumber Partial Recall: 0.4629 - Bookingnumber Partial F1: 0.0331 - Bookingnumber Exact Precision: 0.0159 - Bookingnumber Exact Recall: 0.4276 - Bookingnumber Exact F1: 0.0306 - Travellername Strict Precision: 0.1505 - Travellername Strict Recall: 0.5088 - Travellername Strict F1: 0.2323 - Travellername Ent Type Precision: 0.2157 - Travellername Ent Type Recall: 0.7295 - Travellername Ent Type F1: 0.3330 - Travellername Partial Precision: 0.1851 - Travellername Partial Recall: 0.6259 - Travellername Partial F1: 0.2857 - Travellername Exact Precision: 0.1514 - 
Travellername Exact Recall: 0.5121 - Travellername Exact F1: 0.2337 - Cardnumber Strict Precision: 0.0061 - Cardnumber Strict Recall: 0.3566 - Cardnumber Strict F1: 0.0119 - Cardnumber Ent Type Precision: 0.0083 - Cardnumber Ent Type Recall: 0.4846 - Cardnumber Ent Type F1: 0.0162 - Cardnumber Partial Precision: 0.0073 - Cardnumber Partial Recall: 0.4277 - Cardnumber Partial F1: 0.0143 - Cardnumber Exact Precision: 0.0062 - Cardnumber Exact Recall: 0.3614 - Cardnumber Exact F1: 0.0121 - Hoteladdress Strict Precision: 0.1132 - Hoteladdress Strict Recall: 0.4311 - Hoteladdress Strict F1: 0.1794 - Hoteladdress Ent Type Precision: 0.1774 - Hoteladdress Ent Type Recall: 0.6753 - Hoteladdress Ent Type F1: 0.2810 - Hoteladdress Partial Precision: 0.1474 - Hoteladdress Partial Recall: 0.5611 - Hoteladdress Partial F1: 0.2335 - Hoteladdress Exact Precision: 0.1136 - Hoteladdress Exact Recall: 0.4325 - Hoteladdress Exact F1: 0.1800 - Hotelvatnumber Strict Precision: 0.1161 - Hotelvatnumber Strict Recall: 0.6701 - Hotelvatnumber Strict F1: 0.1979 - Hotelvatnumber Ent Type Precision: 0.1247 - Hotelvatnumber Ent Type Recall: 0.7199 - Hotelvatnumber Ent Type F1: 0.2126 - Hotelvatnumber Partial Precision: 0.1260 - Hotelvatnumber Partial Recall: 0.7275 - Hotelvatnumber Partial F1: 0.2148 - Hotelvatnumber Exact Precision: 0.1213 - Hotelvatnumber Exact Recall: 0.7004 - Hotelvatnumber Exact F1: 0.2069 - Hoteltaxcode Strict Precision: 0.0207 - Hoteltaxcode Strict Recall: 0.3902 - Hoteltaxcode Strict F1: 0.0393 - Hoteltaxcode Ent Type Precision: 0.0221 - Hoteltaxcode Ent Type Recall: 0.4170 - Hoteltaxcode Ent Type F1: 0.0420 - Hoteltaxcode Partial Precision: 0.0265 - Hoteltaxcode Partial Recall: 0.5002 - Hoteltaxcode Partial F1: 0.0504 - Hoteltaxcode Exact Precision: 0.0253 - Hoteltaxcode Exact Recall: 0.4763 - Hoteltaxcode Exact F1: 0.0480 - Hotelfaxnumber Strict Precision: 0.1149 - Hotelfaxnumber Strict Recall: 0.7952 - Hotelfaxnumber Strict F1: 0.2007 - Hotelfaxnumber Ent Type Precision: 0.1294 - Hotelfaxnumber Ent Type Recall: 0.8959 - Hotelfaxnumber Ent Type F1: 0.2261 - Hotelfaxnumber Partial Precision: 0.1222 - Hotelfaxnumber Partial Recall: 0.8461 - Hotelfaxnumber Partial F1: 0.2136 - Hotelfaxnumber Exact Precision: 0.1149 - Hotelfaxnumber Exact Recall: 0.7954 - Hotelfaxnumber Exact F1: 0.2008 - Hotelcity Strict Precision: 0.1571 - Hotelcity Strict Recall: 0.6394 - Hotelcity Strict F1: 0.2522 - Hotelcity Ent Type Precision: 0.1728 - Hotelcity Ent Type Recall: 0.7034 - Hotelcity Ent Type F1: 0.2774 - Hotelcity Partial Precision: 0.1690 - Hotelcity Partial Recall: 0.6880 - Hotelcity Partial F1: 0.2714 - Hotelcity Exact Precision: 0.1598 - Hotelcity Exact Recall: 0.6504 - Hotelcity Exact F1: 0.2565 - Crewtype Strict Precision: 0.0 - Crewtype Strict Recall: 0.0 - Crewtype Strict F1: 0 - Crewtype Ent Type Precision: 0.0 - Crewtype Ent Type Recall: 0.0 - Crewtype Ent Type F1: 0 - Crewtype Partial Precision: 0.0 - Crewtype Partial Recall: 0.0 - Crewtype Partial F1: 0 - Crewtype Exact Precision: 0.0 - Crewtype Exact Recall: 0.0 - Crewtype Exact F1: 0 - Hotelphonenumber Strict Precision: 0.1589 - Hotelphonenumber Strict Recall: 0.7770 - Hotelphonenumber Strict F1: 0.2639 - Hotelphonenumber Ent Type Precision: 0.1852 - Hotelphonenumber Ent Type Recall: 0.9052 - Hotelphonenumber Ent Type F1: 0.3074 - Hotelphonenumber Partial Precision: 0.1724 - Hotelphonenumber Partial Recall: 0.8427 - Hotelphonenumber Partial F1: 0.2862 - Hotelphonenumber Exact Precision: 0.1592 - Hotelphonenumber Exact Recall: 0.7782 - 
Hotelphonenumber Exact F1: 0.2643 - Hotelname Strict Precision: 0.0876 - Hotelname Strict Recall: 0.3153 - Hotelname Strict F1: 0.1371 - Hotelname Ent Type Precision: 0.1514 - Hotelname Ent Type Recall: 0.5452 - Hotelname Ent Type F1: 0.2370 - Hotelname Partial Precision: 0.1214 - Hotelname Partial Recall: 0.4372 - Hotelname Partial F1: 0.1901 - Hotelname Exact Precision: 0.0881 - Hotelname Exact Recall: 0.3171 - Hotelname Exact F1: 0.1379 - Taxamount Strict Precision: 0.0786 - Taxamount Strict Recall: 0.7768 - Taxamount Strict F1: 0.1427 - Taxamount Ent Type Precision: 0.0889 - Taxamount Ent Type Recall: 0.8793 - Taxamount Ent Type F1: 0.1615 - Taxamount Partial Precision: 0.0847 - Taxamount Partial Recall: 0.8372 - Taxamount Partial F1: 0.1538 - Taxamount Exact Precision: 0.0794 - Taxamount Exact Recall: 0.7850 - Taxamount Exact F1: 0.1442 - Cardtype Strict Precision: 0.0421 - Cardtype Strict Recall: 0.5510 - Cardtype Strict F1: 0.0783 - Cardtype Ent Type Precision: 0.0470 - Cardtype Ent Type Recall: 0.6141 - Cardtype Ent Type F1: 0.0873 - Cardtype Partial Precision: 0.0447 - Cardtype Partial Recall: 0.5842 - Cardtype Partial F1: 0.0830 - Cardtype Exact Precision: 0.0423 - Cardtype Exact Recall: 0.5525 - Cardtype Exact F1: 0.0785 - Invoicenumber Strict Precision: 0.2015 - Invoicenumber Strict Recall: 0.8417 - Invoicenumber Strict F1: 0.3252 - Invoicenumber Ent Type Precision: 0.2144 - Invoicenumber Ent Type Recall: 0.8956 - Invoicenumber Ent Type F1: 0.3460 - Invoicenumber Partial Precision: 0.2097 - Invoicenumber Partial Recall: 0.8758 - Invoicenumber Partial F1: 0.3384 - Invoicenumber Exact Precision: 0.2031 - Invoicenumber Exact Recall: 0.8482 - Invoicenumber Exact F1: 0.3277 - Netamount Strict Precision: 0.0887 - Netamount Strict Recall: 0.7604 - Netamount Strict F1: 0.1588 - Netamount Ent Type Precision: 0.1004 - Netamount Ent Type Recall: 0.8608 - Netamount Ent Type F1: 0.1797 - Netamount Partial Precision: 0.0956 - Netamount Partial Recall: 0.8204 - Netamount Partial F1: 0.1713 - Netamount Exact Precision: 0.0896 - Netamount Exact Recall: 0.7683 - Netamount Exact F1: 0.1604 - Hotelzipcode Strict Precision: 0.1696 - Hotelzipcode Strict Recall: 0.6972 - Hotelzipcode Strict F1: 0.2729 - Hotelzipcode Ent Type Precision: 0.1809 - Hotelzipcode Ent Type Recall: 0.7437 - Hotelzipcode Ent Type F1: 0.2911 - Hotelzipcode Partial Precision: 0.1768 - Hotelzipcode Partial Recall: 0.7266 - Hotelzipcode Partial F1: 0.2844 - Hotelzipcode Exact Precision: 0.1705 - Hotelzipcode Exact Recall: 0.7007 - Hotelzipcode Exact F1: 0.2742 - Documenttype Strict Precision: 0.1822 - Documenttype Strict Recall: 0.7682 - Documenttype Strict F1: 0.2945 - Documenttype Ent Type Precision: 0.1984 - Documenttype Ent Type Recall: 0.8366 - Documenttype Ent Type F1: 0.3208 - Documenttype Partial Precision: 0.1911 - Documenttype Partial Recall: 0.8056 - Documenttype Partial F1: 0.3089 - Documenttype Exact Precision: 0.1826 - Documenttype Exact Recall: 0.7699 - Documenttype Exact F1: 0.2952 - Arrivaltrainno Strict Precision: 0.0 - Arrivaltrainno Strict Recall: 0.0 - Arrivaltrainno Strict F1: 0 - Arrivaltrainno Ent Type Precision: 0.0 - Arrivaltrainno Ent Type Recall: 0.0 - Arrivaltrainno Ent Type F1: 0 - Arrivaltrainno Partial Precision: 0.0000 - Arrivaltrainno Partial Recall: 0.3333 - Arrivaltrainno Partial F1: 0.0000 - Arrivaltrainno Exact Precision: 0.0 - Arrivaltrainno Exact Recall: 0.0 - Arrivaltrainno Exact F1: 0 - Checkout Strict Precision: 0.2401 - Checkout Strict Recall: 0.9053 - Checkout Strict F1: 0.3795 - 
Checkout Ent Type Precision: 0.2436 - Checkout Ent Type Recall: 0.9186 - Checkout Ent Type F1: 0.3851 - Checkout Partial Precision: 0.2437 - Checkout Partial Recall: 0.9189 - Checkout Partial F1: 0.3853 - Checkout Exact Precision: 0.2418 - Checkout Exact Recall: 0.9116 - Checkout Exact F1: 0.3822 - Hotelcountry Strict Precision: 0.0463 - Hotelcountry Strict Recall: 0.6994 - Hotelcountry Strict F1: 0.0868 - Hotelcountry Ent Type Precision: 0.0475 - Hotelcountry Ent Type Recall: 0.7181 - Hotelcountry Ent Type F1: 0.0891 - Hotelcountry Partial Precision: 0.0489 - Hotelcountry Partial Recall: 0.7401 - Hotelcountry Partial F1: 0.0918 - Hotelcountry Exact Precision: 0.0482 - Hotelcountry Exact Recall: 0.7289 - Hotelcountry Exact F1: 0.0904 - Hotelemail Strict Precision: 0.1616 - Hotelemail Strict Recall: 0.8715 - Hotelemail Strict F1: 0.2727 - Hotelemail Ent Type Precision: 0.1703 - Hotelemail Ent Type Recall: 0.9185 - Hotelemail Ent Type F1: 0.2874 - Hotelemail Partial Precision: 0.1660 - Hotelemail Partial Recall: 0.8953 - Hotelemail Partial F1: 0.2801 - Hotelemail Exact Precision: 0.1617 - Hotelemail Exact Recall: 0.8717 - Hotelemail Exact F1: 0.2727 - Checkin Strict Precision: 0.2382 - Checkin Strict Recall: 0.8892 - Checkin Strict F1: 0.3758 - Checkin Ent Type Precision: 0.2430 - Checkin Ent Type Recall: 0.9068 - Checkin Ent Type F1: 0.3832 - Checkin Partial Precision: 0.2442 - Checkin Partial Recall: 0.9113 - Checkin Partial F1: 0.3851 - Checkin Exact Precision: 0.2412 - Checkin Exact Recall: 0.9004 - Checkin Exact F1: 0.3805 - Companyname Strict Precision: 0.1722 - Companyname Strict Recall: 0.6711 - Companyname Strict F1: 0.2740 - Companyname Ent Type Precision: 0.2202 - Companyname Ent Type Recall: 0.8583 - Companyname Ent Type F1: 0.3505 - Companyname Partial Precision: 0.1993 - Companyname Partial Recall: 0.7770 - Companyname Partial F1: 0.3173 - Companyname Exact Precision: 0.1734 - Companyname Exact Recall: 0.6758 - Companyname Exact F1: 0.2760 - Hotelbankaccount Strict Precision: 0.0 - Hotelbankaccount Strict Recall: 0.0 - Hotelbankaccount Strict F1: 0 - Hotelbankaccount Ent Type Precision: 0.0 - Hotelbankaccount Ent Type Recall: 0.0 - Hotelbankaccount Ent Type F1: 0 - Hotelbankaccount Partial Precision: 0.0 - Hotelbankaccount Partial Recall: 0.0 - Hotelbankaccount Partial F1: 0 - Hotelbankaccount Exact Precision: 0.0 - Hotelbankaccount Exact Recall: 0.0 - Hotelbankaccount Exact F1: 0 - Roomno Strict Precision: 0.2261 - Roomno Strict Recall: 0.8231 - Roomno Strict F1: 0.3547 - Roomno Ent Type Precision: 0.2282 - Roomno Ent Type Recall: 0.8309 - Roomno Ent Type F1: 0.3581 - Roomno Partial Precision: 0.2304 - Roomno Partial Recall: 0.8388 - Roomno Partial F1: 0.3615 - Roomno Exact Precision: 0.2291 - Roomno Exact Recall: 0.8342 - Roomno Exact F1: 0.3595 - Companycountry Strict Precision: 0.1378 - Companycountry Strict Recall: 0.8544 - Companycountry Strict F1: 0.2374 - Companycountry Ent Type Precision: 0.1395 - Companycountry Ent Type Recall: 0.8646 - Companycountry Ent Type F1: 0.2402 - Companycountry Partial Precision: 0.1427 - Companycountry Partial Recall: 0.8843 - Companycountry Partial F1: 0.2457 - Companycountry Exact Precision: 0.1414 - Companycountry Exact Recall: 0.8764 - Companycountry Exact F1: 0.2435 - Grossamount Strict Precision: 0.1480 - Grossamount Strict Recall: 0.6662 - Grossamount Strict F1: 0.2422 - Grossamount Ent Type Precision: 0.1630 - Grossamount Ent Type Recall: 0.7340 - Grossamount Ent Type F1: 0.2668 - Grossamount Partial Precision: 0.1573 - Grossamount 
Partial Recall: 0.7080 - Grossamount Partial F1: 0.2574 - Grossamount Exact Precision: 0.1493 - Grossamount Exact Recall: 0.6722 - Grossamount Exact F1: 0.2444 - Companyvatnumber Strict Precision: 0.0475 - Companyvatnumber Strict Recall: 0.5826 - Companyvatnumber Strict F1: 0.0879 - Companyvatnumber Ent Type Precision: 0.0508 - Companyvatnumber Ent Type Recall: 0.6226 - Companyvatnumber Ent Type F1: 0.0939 - Companyvatnumber Partial Precision: 0.0549 - Companyvatnumber Partial Recall: 0.6731 - Companyvatnumber Partial F1: 0.1015 - Companyvatnumber Exact Precision: 0.0528 - Companyvatnumber Exact Recall: 0.6478 - Companyvatnumber Exact F1: 0.0977 - Currencycode Strict Precision: 0.0865 - Currencycode Strict Recall: 0.5113 - Currencycode Strict F1: 0.1480 - Currencycode Ent Type Precision: 0.0868 - Currencycode Ent Type Recall: 0.5129 - Currencycode Ent Type F1: 0.1485 - Currencycode Partial Precision: 0.0868 - Currencycode Partial Recall: 0.5127 - Currencycode Partial F1: 0.1484 - Currencycode Exact Precision: 0.0866 - Currencycode Exact Recall: 0.5116 - Currencycode Exact F1: 0.1481 - Reservationnumber Strict Precision: 0.0260 - Reservationnumber Strict Recall: 0.4270 - Reservationnumber Strict F1: 0.0490 - Reservationnumber Ent Type Precision: 0.0273 - Reservationnumber Ent Type Recall: 0.4483 - Reservationnumber Ent Type F1: 0.0514 - Reservationnumber Partial Precision: 0.0321 - Reservationnumber Partial Recall: 0.5273 - Reservationnumber Partial F1: 0.0605 - Reservationnumber Exact Precision: 0.0309 - Reservationnumber Exact Recall: 0.5084 - Reservationnumber Exact F1: 0.0583 - Companyaddress Strict Precision: 0.1865 - Companyaddress Strict Recall: 0.7494 - Companyaddress Strict F1: 0.2986 - Companyaddress Ent Type Precision: 0.2247 - Companyaddress Ent Type Recall: 0.9029 - Companyaddress Ent Type F1: 0.3598 - Companyaddress Partial Precision: 0.2081 - Companyaddress Partial Recall: 0.8364 - Companyaddress Partial F1: 0.3333 - Companyaddress Exact Precision: 0.1876 - Companyaddress Exact Recall: 0.7540 - Companyaddress Exact F1: 0.3005 - Invoicedate Strict Precision: 0.2145 - Invoicedate Strict Recall: 0.8946 - Invoicedate Strict F1: 0.3460 - Invoicedate Ent Type Precision: 0.2209 - Invoicedate Ent Type Recall: 0.9214 - Invoicedate Ent Type F1: 0.3564 - Invoicedate Partial Precision: 0.2193 - Invoicedate Partial Recall: 0.9145 - Invoicedate Partial F1: 0.3537 - Invoicedate Exact Precision: 0.2160 - Invoicedate Exact Recall: 0.9009 - Invoicedate Exact F1: 0.3485 - Pageno Strict Precision: 0.1203 - Pageno Strict Recall: 0.8195 - Pageno Strict F1: 0.2099 - Pageno Ent Type Precision: 0.1221 - Pageno Ent Type Recall: 0.8313 - Pageno Ent Type F1: 0.2129 - Pageno Partial Precision: 0.1215 - Pageno Partial Recall: 0.8272 - Pageno Partial F1: 0.2118 - Pageno Exact Precision: 0.1206 - Pageno Exact Recall: 0.8213 - Pageno Exact F1: 0.2103 - Companycity Strict Precision: 0.2014 - Companycity Strict Recall: 0.8308 - Companycity Strict F1: 0.3242 - Companycity Ent Type Precision: 0.2163 - Companycity Ent Type Recall: 0.8922 - Companycity Ent Type F1: 0.3482 - Companycity Partial Precision: 0.2135 - Companycity Partial Recall: 0.8806 - Companycity Partial F1: 0.3436 - Companycity Exact Precision: 0.2052 - Companycity Exact Recall: 0.8463 - Companycity Exact F1: 0.3303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters 
were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.0625 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Strict Precision | Overall Strict Recall | Overall Strict F1 | Overall Ent Type Precision | Overall Ent Type Recall | Overall Ent Type F1 | Overall Partial Precision | Overall Partial Recall | Overall Partial F1 | Overall Exact Precision | Overall Exact Recall | Overall Exact F1 | Companyzipcode Strict Precision | Companyzipcode Strict Recall | Companyzipcode Strict F1 | Companyzipcode Ent Type Precision | Companyzipcode Ent Type Recall | Companyzipcode Ent Type F1 | Companyzipcode Partial Precision | Companyzipcode Partial Recall | Companyzipcode Partial F1 | Companyzipcode Exact Precision | Companyzipcode Exact Recall | Companyzipcode Exact F1 | Trafficdate Strict Precision | Trafficdate Strict Recall | Trafficdate Strict F1 | Trafficdate Ent Type Precision | Trafficdate Ent Type Recall | Trafficdate Ent Type F1 | Trafficdate Partial Precision | Trafficdate Partial Recall | Trafficdate Partial F1 | Trafficdate Exact Precision | Trafficdate Exact Recall | Trafficdate Exact F1 | 0 Strict Precision | 0 Strict Recall | 0 Strict F1 | 0 Ent Type Precision | 0 Ent Type Recall | 0 Ent Type F1 | 0 Partial Precision | 0 Partial Recall | 0 Partial F1 | 0 Exact Precision | 0 Exact Recall | 0 Exact F1 | Externalreservationnumber Strict Precision | Externalreservationnumber Strict Recall | Externalreservationnumber Strict F1 | Externalreservationnumber Ent Type Precision | Externalreservationnumber Ent Type Recall | Externalreservationnumber Ent Type F1 | Externalreservationnumber Partial Precision | Externalreservationnumber Partial Recall | Externalreservationnumber Partial F1 | Externalreservationnumber Exact Precision | Externalreservationnumber Exact Recall | Externalreservationnumber Exact F1 | Bookingnumber Strict Precision | Bookingnumber Strict Recall | Bookingnumber Strict F1 | Bookingnumber Ent Type Precision | Bookingnumber Ent Type Recall | Bookingnumber Ent Type F1 | Bookingnumber Partial Precision | Bookingnumber Partial Recall | Bookingnumber Partial F1 | Bookingnumber Exact Precision | Bookingnumber Exact Recall | Bookingnumber Exact F1 | Travellername Strict Precision | Travellername Strict Recall | Travellername Strict F1 | Travellername Ent Type Precision | Travellername Ent Type Recall | Travellername Ent Type F1 | Travellername Partial Precision | Travellername Partial Recall | Travellername Partial F1 | Travellername Exact Precision | Travellername Exact Recall | Travellername Exact F1 | Cardnumber Strict Precision | Cardnumber Strict Recall | Cardnumber Strict F1 | Cardnumber Ent Type Precision | Cardnumber Ent Type Recall | Cardnumber Ent Type F1 | Cardnumber Partial Precision | Cardnumber Partial Recall | Cardnumber Partial F1 | Cardnumber Exact Precision | Cardnumber Exact Recall | Cardnumber Exact F1 | Hoteladdress Strict Precision | Hoteladdress Strict Recall | Hoteladdress Strict F1 | Hoteladdress Ent Type Precision | Hoteladdress Ent Type Recall | Hoteladdress Ent Type F1 | Hoteladdress Partial Precision | Hoteladdress Partial Recall | Hoteladdress Partial F1 | Hoteladdress Exact Precision | Hoteladdress Exact Recall | Hoteladdress Exact F1 | Hotelvatnumber Strict Precision | Hotelvatnumber Strict Recall | Hotelvatnumber Strict F1 | Hotelvatnumber Ent 
Type Precision | Hotelvatnumber Ent Type Recall | Hotelvatnumber Ent Type F1 | Hotelvatnumber Partial Precision | Hotelvatnumber Partial Recall | Hotelvatnumber Partial F1 | Hotelvatnumber Exact Precision | Hotelvatnumber Exact Recall | Hotelvatnumber Exact F1 | Hoteltaxcode Strict Precision | Hoteltaxcode Strict Recall | Hoteltaxcode Strict F1 | Hoteltaxcode Ent Type Precision | Hoteltaxcode Ent Type Recall | Hoteltaxcode Ent Type F1 | Hoteltaxcode Partial Precision | Hoteltaxcode Partial Recall | Hoteltaxcode Partial F1 | Hoteltaxcode Exact Precision | Hoteltaxcode Exact Recall | Hoteltaxcode Exact F1 | Hotelfaxnumber Strict Precision | Hotelfaxnumber Strict Recall | Hotelfaxnumber Strict F1 | Hotelfaxnumber Ent Type Precision | Hotelfaxnumber Ent Type Recall | Hotelfaxnumber Ent Type F1 | Hotelfaxnumber Partial Precision | Hotelfaxnumber Partial Recall | Hotelfaxnumber Partial F1 | Hotelfaxnumber Exact Precision | Hotelfaxnumber Exact Recall | Hotelfaxnumber Exact F1 | Hotelcity Strict Precision | Hotelcity Strict Recall | Hotelcity Strict F1 | Hotelcity Ent Type Precision | Hotelcity Ent Type Recall | Hotelcity Ent Type F1 | Hotelcity Partial Precision | Hotelcity Partial Recall | Hotelcity Partial F1 | Hotelcity Exact Precision | Hotelcity Exact Recall | Hotelcity Exact F1 | Crewtype Strict Precision | Crewtype Strict Recall | Crewtype Strict F1 | Crewtype Ent Type Precision | Crewtype Ent Type Recall | Crewtype Ent Type F1 | Crewtype Partial Precision | Crewtype Partial Recall | Crewtype Partial F1 | Crewtype Exact Precision | Crewtype Exact Recall | Crewtype Exact F1 | Hotelphonenumber Strict Precision | Hotelphonenumber Strict Recall | Hotelphonenumber Strict F1 | Hotelphonenumber Ent Type Precision | Hotelphonenumber Ent Type Recall | Hotelphonenumber Ent Type F1 | Hotelphonenumber Partial Precision | Hotelphonenumber Partial Recall | Hotelphonenumber Partial F1 | Hotelphonenumber Exact Precision | Hotelphonenumber Exact Recall | Hotelphonenumber Exact F1 | Hotelname Strict Precision | Hotelname Strict Recall | Hotelname Strict F1 | Hotelname Ent Type Precision | Hotelname Ent Type Recall | Hotelname Ent Type F1 | Hotelname Partial Precision | Hotelname Partial Recall | Hotelname Partial F1 | Hotelname Exact Precision | Hotelname Exact Recall | Hotelname Exact F1 | Taxamount Strict Precision | Taxamount Strict Recall | Taxamount Strict F1 | Taxamount Ent Type Precision | Taxamount Ent Type Recall | Taxamount Ent Type F1 | Taxamount Partial Precision | Taxamount Partial Recall | Taxamount Partial F1 | Taxamount Exact Precision | Taxamount Exact Recall | Taxamount Exact F1 | Cardtype Strict Precision | Cardtype Strict Recall | Cardtype Strict F1 | Cardtype Ent Type Precision | Cardtype Ent Type Recall | Cardtype Ent Type F1 | Cardtype Partial Precision | Cardtype Partial Recall | Cardtype Partial F1 | Cardtype Exact Precision | Cardtype Exact Recall | Cardtype Exact F1 | Invoicenumber Strict Precision | Invoicenumber Strict Recall | Invoicenumber Strict F1 | Invoicenumber Ent Type Precision | Invoicenumber Ent Type Recall | Invoicenumber Ent Type F1 | Invoicenumber Partial Precision | Invoicenumber Partial Recall | Invoicenumber Partial F1 | Invoicenumber Exact Precision | Invoicenumber Exact Recall | Invoicenumber Exact F1 | Netamount Strict Precision | Netamount Strict Recall | Netamount Strict F1 | Netamount Ent Type Precision | Netamount Ent Type Recall | Netamount Ent Type F1 | Netamount Partial Precision | Netamount Partial Recall | Netamount Partial F1 | Netamount Exact 
Precision | Netamount Exact Recall | Netamount Exact F1 | Hotelzipcode Strict Precision | Hotelzipcode Strict Recall | Hotelzipcode Strict F1 | Hotelzipcode Ent Type Precision | Hotelzipcode Ent Type Recall | Hotelzipcode Ent Type F1 | Hotelzipcode Partial Precision | Hotelzipcode Partial Recall | Hotelzipcode Partial F1 | Hotelzipcode Exact Precision | Hotelzipcode Exact Recall | Hotelzipcode Exact F1 | Documenttype Strict Precision | Documenttype Strict Recall | Documenttype Strict F1 | Documenttype Ent Type Precision | Documenttype Ent Type Recall | Documenttype Ent Type F1 | Documenttype Partial Precision | Documenttype Partial Recall | Documenttype Partial F1 | Documenttype Exact Precision | Documenttype Exact Recall | Documenttype Exact F1 | Arrivaltrainno Strict Precision | Arrivaltrainno Strict Recall | Arrivaltrainno Strict F1 | Arrivaltrainno Ent Type Precision | Arrivaltrainno Ent Type Recall | Arrivaltrainno Ent Type F1 | Arrivaltrainno Partial Precision | Arrivaltrainno Partial Recall | Arrivaltrainno Partial F1 | Arrivaltrainno Exact Precision | Arrivaltrainno Exact Recall | Arrivaltrainno Exact F1 | Checkout Strict Precision | Checkout Strict Recall | Checkout Strict F1 | Checkout Ent Type Precision | Checkout Ent Type Recall | Checkout Ent Type F1 | Checkout Partial Precision | Checkout Partial Recall | Checkout Partial F1 | Checkout Exact Precision | Checkout Exact Recall | Checkout Exact F1 | Hotelcountry Strict Precision | Hotelcountry Strict Recall | Hotelcountry Strict F1 | Hotelcountry Ent Type Precision | Hotelcountry Ent Type Recall | Hotelcountry Ent Type F1 | Hotelcountry Partial Precision | Hotelcountry Partial Recall | Hotelcountry Partial F1 | Hotelcountry Exact Precision | Hotelcountry Exact Recall | Hotelcountry Exact F1 | Companyname Strict Precision | Companyname Strict Recall | Companyname Strict F1 | Companyname Ent Type Precision | Companyname Ent Type Recall | Companyname Ent Type F1 | Companyname Partial Precision | Companyname Partial Recall | Companyname Partial F1 | Companyname Exact Precision | Companyname Exact Recall | Companyname Exact F1 | Hotelemail Strict Precision | Hotelemail Strict Recall | Hotelemail Strict F1 | Hotelemail Ent Type Precision | Hotelemail Ent Type Recall | Hotelemail Ent Type F1 | Hotelemail Partial Precision | Hotelemail Partial Recall | Hotelemail Partial F1 | Hotelemail Exact Precision | Hotelemail Exact Recall | Hotelemail Exact F1 | Checkin Strict Precision | Checkin Strict Recall | Checkin Strict F1 | Checkin Ent Type Precision | Checkin Ent Type Recall | Checkin Ent Type F1 | Checkin Partial Precision | Checkin Partial Recall | Checkin Partial F1 | Checkin Exact Precision | Checkin Exact Recall | Checkin Exact F1 | Hotelbankaccount Strict Precision | Hotelbankaccount Strict Recall | Hotelbankaccount Strict F1 | Hotelbankaccount Ent Type Precision | Hotelbankaccount Ent Type Recall | Hotelbankaccount Ent Type F1 | Hotelbankaccount Partial Precision | Hotelbankaccount Partial Recall | Hotelbankaccount Partial F1 | Hotelbankaccount Exact Precision | Hotelbankaccount Exact Recall | Hotelbankaccount Exact F1 | Roomno Strict Precision | Roomno Strict Recall | Roomno Strict F1 | Roomno Ent Type Precision | Roomno Ent Type Recall | Roomno Ent Type F1 | Roomno Partial Precision | Roomno Partial Recall | Roomno Partial F1 | Roomno Exact Precision | Roomno Exact Recall | Roomno Exact F1 | Companycountry Strict Precision | Companycountry Strict Recall | Companycountry Strict F1 | Companycountry Ent Type Precision | 
Companycountry Ent Type Recall | Companycountry Ent Type F1 | Companycountry Partial Precision | Companycountry Partial Recall | Companycountry Partial F1 | Companycountry Exact Precision | Companycountry Exact Recall | Companycountry Exact F1 | Grossamount Strict Precision | Grossamount Strict Recall | Grossamount Strict F1 | Grossamount Ent Type Precision | Grossamount Ent Type Recall | Grossamount Ent Type F1 | Grossamount Partial Precision | Grossamount Partial Recall | Grossamount Partial F1 | Grossamount Exact Precision | Grossamount Exact Recall | Grossamount Exact F1 | Companyvatnumber Strict Precision | Companyvatnumber Strict Recall | Companyvatnumber Strict F1 | Companyvatnumber Ent Type Precision | Companyvatnumber Ent Type Recall | Companyvatnumber Ent Type F1 | Companyvatnumber Partial Precision | Companyvatnumber Partial Recall | Companyvatnumber Partial F1 | Companyvatnumber Exact Precision | Companyvatnumber Exact Recall | Companyvatnumber Exact F1 | Currencycode Strict Precision | Currencycode Strict Recall | Currencycode Strict F1 | Currencycode Ent Type Precision | Currencycode Ent Type Recall | Currencycode Ent Type F1 | Currencycode Partial Precision | Currencycode Partial Recall | Currencycode Partial F1 | Currencycode Exact Precision | Currencycode Exact Recall | Currencycode Exact F1 | Reservationnumber Strict Precision | Reservationnumber Strict Recall | Reservationnumber Strict F1 | Reservationnumber Ent Type Precision | Reservationnumber Ent Type Recall | Reservationnumber Ent Type F1 | Reservationnumber Partial Precision | Reservationnumber Partial Recall | Reservationnumber Partial F1 | Reservationnumber Exact Precision | Reservationnumber Exact Recall | Reservationnumber Exact F1 | Companyaddress Strict Precision | Companyaddress Strict Recall | Companyaddress Strict F1 | Companyaddress Ent Type Precision | Companyaddress Ent Type Recall | Companyaddress Ent Type F1 | Companyaddress Partial Precision | Companyaddress Partial Recall | Companyaddress Partial F1 | Companyaddress Exact Precision | Companyaddress Exact Recall | Companyaddress Exact F1 | Invoicedate Strict Precision | Invoicedate Strict Recall | Invoicedate Strict F1 | Invoicedate Ent Type Precision | Invoicedate Ent Type Recall | Invoicedate Ent Type F1 | Invoicedate Partial Precision | Invoicedate Partial Recall | Invoicedate Partial F1 | Invoicedate Exact Precision | Invoicedate Exact Recall | Invoicedate Exact F1 | Pageno Strict Precision | Pageno Strict Recall | Pageno Strict F1 | Pageno Ent Type Precision | Pageno Ent Type Recall | Pageno Ent Type F1 | Pageno Partial Precision | Pageno Partial Recall | Pageno Partial F1 | Pageno Exact Precision | Pageno Exact Recall | Pageno Exact F1 | Companycity Strict Precision | Companycity Strict Recall | Companycity Strict F1 | Companycity Ent Type Precision | Companycity Ent Type Recall | Companycity Ent Type F1 | Companycity Partial Precision | Companycity Partial Recall | Companycity Partial F1 | Companycity Exact Precision | Companycity Exact Recall | Companycity Exact F1 | 
|:-------------:|:-----:|:------:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:------------------:|:---------------:|:-----------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|:----------------:|:------------:|:-----------------:|:--------------:|:----------:|:------------------------------------------:|:---------------------------------------:|:-----------------------------------:|:--------------------------------------------:|:-----------------------------------------:|:-------------------------------------:|:-------------------------------------------:|:----------------------------------------:|:------------------------------------:|:-----------------------------------------:|:--------------------------------------:|:----------------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:--
---------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:----------------------------:|:-------------------------:|:---------------------:|:---------------------------:|:------------------------:|:--------------------:|:-------------------------:|:----------------------:|:------------------:|:-------------------------:|:----------------------:|:------------------:|:---------------------------:|:------------------------:|:--------------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:-----------------------------------:|:--------------------------------:|:----------------------------:|:----------------------------------:|:-------------------------------:|:---------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:----------------------------:|:-------------------------:|:---------------------:|:---------------------------:|:------------------------:|:--------------------:|:-------------------------:|:----------------------:|:------------------:|:--------------------------:|:-----------------------:|:-------------------:|:----------------------------:|:-------------------------:|:---------------------:|:---------------------------:|:------------------------:|:--------------------:|:-------------------------:|:----------------------:|:------------------:|:-------------------------:|:----------------------:|:------------------:|:---------------------------:|:------------------------:|:--------------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------------:|:---------------------------:|:-----------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:----------------------------:|:-------------------------:|:---------------------:|:---------------------------:|:------------------------:|:--------------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:--
-------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-------------------------:|:----------------------:|:------------------:|:---------------------------:|:------------------------:|:--------------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:-----------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:----------------------------:|:-------------------------:|:---------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:---------------------------:|:------------------------:|:--------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------------:|:--------------------:|:----------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:-----------------------------------:|:--------------------------------:|:----------------------------:|:----------------------------------:|:-------------------------------:|:---------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------
------------------:|:---------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:-----------------------------------:|:--------------------------------:|:----------------------------:|:----------------------------------:|:-------------------------------:|:---------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:----------------------------------:|:-------------------------------:|:---------------------------:|:------------------------------------:|:---------------------------------:|:-----------------------------:|:-----------------------------------:|:--------------------------------:|:----------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:-------------------------------:|:----------------------------:|:------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|:-----------------------------:|:-------------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:----------------------------:|:-------------------------:|:---------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:| | 0.1743 | 1.0 | 18055 | 0.2317 | 0.7239 | 0.6817 | 0.7022 | 0.8271 | 0.7788 | 0.8022 | 0.7918 | 0.7456 | 0.7680 | 0.7355 | 0.6926 | 0.7134 | 0.2090 | 0.8820 | 0.3379 | 0.2176 | 0.9185 | 0.3519 | 0.2164 | 0.9133 | 0.3499 | 0.2112 | 0.8916 | 0.3415 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0.0 | 0 | 0.0002 | 0.2 | 0.0004 | 0.0005 | 0.4636 | 0.0010 | 0.0002 | 0.2 | 0.0004 | 0.0088 | 0.2846 | 0.0171 | 0.0114 | 0.3683 | 0.0221 | 0.0130 | 0.4200 | 0.0252 | 0.0108 | 0.3484 | 0.0209 | 0.1285 | 0.4583 | 0.2007 | 0.1955 | 0.6970 | 0.3053 | 0.1663 | 0.5931 | 0.2598 | 0.1307 | 0.4662 | 0.2042 | 0.0067 | 0.3822 | 0.0132 | 0.0091 | 0.5209 | 0.0180 | 0.0080 | 0.4537 | 0.0156 | 0.0067 | 0.3844 | 
0.0133 | 0.0963 | 0.3788 | 0.1535 | 0.1711 | 0.6731 | 0.2728 | 0.1375 | 0.5408 | 0.2192 | 0.0969 | 0.3814 | 0.1546 | 0.1075 | 0.6661 | 0.1851 | 0.1201 | 0.7443 | 0.2069 | 0.1189 | 0.7366 | 0.2047 | 0.1119 | 0.6931 | 0.1926 | 0.0181 | 0.3388 | 0.0344 | 0.0213 | 0.3986 | 0.0404 | 0.0230 | 0.4294 | 0.0436 | 0.0212 | 0.3964 | 0.0402 | 0.1117 | 0.8083 | 0.1962 | 0.1252 | 0.9067 | 0.2201 | 0.1187 | 0.8590 | 0.2085 | 0.1117 | 0.8085 | 0.1963 | 0.1387 | 0.5557 | 0.2220 | 0.1601 | 0.6414 | 0.2563 | 0.1566 | 0.6271 | 0.2506 | 0.1432 | 0.5736 | 0.2292 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1540 | 0.7823 | 0.2574 | 0.1789 | 0.9086 | 0.2989 | 0.1667 | 0.8469 | 0.2786 | 0.1542 | 0.7830 | 0.2576 | 0.0736 | 0.2973 | 0.1180 | 0.1378 | 0.5567 | 0.2209 | 0.1082 | 0.4372 | 0.1735 | 0.0739 | 0.2986 | 0.1185 | 0.0708 | 0.7381 | 0.1292 | 0.0816 | 0.8507 | 0.1490 | 0.0773 | 0.8056 | 0.1411 | 0.0717 | 0.7478 | 0.1309 | 0.0397 | 0.4325 | 0.0728 | 0.0458 | 0.4988 | 0.0839 | 0.0429 | 0.4673 | 0.0786 | 0.0399 | 0.4340 | 0.0730 | 0.1916 | 0.7942 | 0.3088 | 0.2104 | 0.8720 | 0.3390 | 0.2030 | 0.8416 | 0.3272 | 0.1933 | 0.8013 | 0.3115 | 0.0807 | 0.6741 | 0.1441 | 0.0936 | 0.7822 | 0.1672 | 0.0898 | 0.7504 | 0.1604 | 0.0830 | 0.6936 | 0.1483 | 0.1559 | 0.6593 | 0.2522 | 0.1716 | 0.7256 | 0.2776 | 0.1660 | 0.7019 | 0.2685 | 0.1575 | 0.6661 | 0.2548 | 0.1808 | 0.7190 | 0.2890 | 0.2013 | 0.8005 | 0.3218 | 0.1918 | 0.7624 | 0.3064 | 0.1809 | 0.7193 | 0.2892 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2368 | 0.8784 | 0.3730 | 0.2413 | 0.8952 | 0.3802 | 0.2414 | 0.8955 | 0.3803 | 0.2389 | 0.8862 | 0.3763 | 0.0337 | 0.5778 | 0.0637 | 0.0347 | 0.5948 | 0.0655 | 0.0368 | 0.6319 | 0.0696 | 0.0363 | 0.6225 | 0.0686 | 0.1585 | 0.6309 | 0.2534 | 0.2106 | 0.8381 | 0.3366 | 0.1885 | 0.7500 | 0.3012 | 0.1598 | 0.6359 | 0.2554 | 0.1497 | 0.8249 | 0.2534 | 0.1633 | 0.8996 | 0.2764 | 0.1565 | 0.8625 | 0.2650 | 0.1498 | 0.8251 | 0.2535 | 0.2342 | 0.8870 | 0.3705 | 0.2401 | 0.9094 | 0.3799 | 0.2405 | 0.9110 | 0.3805 | 0.2371 | 0.8980 | 0.3751 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1997 | 0.7560 | 0.3159 | 0.2006 | 0.7596 | 0.3174 | 0.2030 | 0.7688 | 0.3212 | 0.2025 | 0.7667 | 0.3204 | 0.1275 | 0.7950 | 0.2197 | 0.1289 | 0.8042 | 0.2223 | 0.1334 | 0.8319 | 0.2299 | 0.1319 | 0.8225 | 0.2273 | 0.1423 | 0.6881 | 0.2358 | 0.1581 | 0.7644 | 0.2619 | 0.1520 | 0.7354 | 0.2520 | 0.1441 | 0.6969 | 0.2388 | 0.0453 | 0.5350 | 0.0835 | 0.0499 | 0.5893 | 0.0920 | 0.0552 | 0.6518 | 0.1017 | 0.0521 | 0.6160 | 0.0962 | 0.0820 | 0.4549 | 0.1389 | 0.0826 | 0.4581 | 0.1399 | 0.0825 | 0.4578 | 0.1398 | 0.0821 | 0.4558 | 0.1392 | 0.0238 | 0.4498 | 0.0452 | 0.0264 | 0.4989 | 0.0502 | 0.0314 | 0.5937 | 0.0597 | 0.0294 | 0.5548 | 0.0558 | 0.1749 | 0.7092 | 0.2806 | 0.2174 | 0.8817 | 0.3488 | 0.1992 | 0.8080 | 0.3196 | 0.1764 | 0.7155 | 0.2831 | 0.2076 | 0.8929 | 0.3369 | 0.2148 | 0.9241 | 0.3486 | 0.2127 | 0.9149 | 0.3452 | 0.2088 | 0.8979 | 0.3388 | 0.1170 | 0.8096 | 0.2045 | 0.1187 | 0.8213 | 0.2075 | 0.1182 | 0.8177 | 0.2066 | 0.1174 | 0.8117 | 0.2051 | 0.1826 | 0.7313 | 0.2923 | 0.2055 | 0.8230 | 0.3289 | 0.2016 | 0.8073 | 0.3227 | 0.1878 | 0.7520 | 0.3006 | | 0.1247 | 2.0 | 36110 | 0.2485 | 0.7498 | 0.7102 | 0.7294 | 0.8382 | 0.7940 | 0.8155 | 0.8084 | 0.7657 | 0.7865 | 0.7608 | 0.7206 | 0.7401 | 0.2206 | 0.9061 | 0.3548 | 0.2275 | 0.9344 | 0.3659 | 0.2268 | 0.9315 | 0.3648 | 0.2225 | 0.9140 | 0.3579 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 
0 | 0 | 0.0017 | 0.2222 | 0.0033 | 0.0019 | 0.2520 | 0.0038 | 0.0032 | 0.4255 | 0.0064 | 0.0029 | 0.3821 | 0.0057 | 0.0145 | 0.3803 | 0.0279 | 0.0168 | 0.4412 | 0.0324 | 0.0190 | 0.5003 | 0.0367 | 0.0175 | 0.4592 | 0.0337 | 0.1491 | 0.5025 | 0.2299 | 0.2155 | 0.7264 | 0.3324 | 0.1840 | 0.6201 | 0.2838 | 0.1498 | 0.5049 | 0.2310 | 0.0062 | 0.3294 | 0.0123 | 0.0092 | 0.4844 | 0.0180 | 0.0078 | 0.4107 | 0.0153 | 0.0063 | 0.3326 | 0.0124 | 0.1106 | 0.4258 | 0.1756 | 0.1767 | 0.6802 | 0.2805 | 0.1465 | 0.5639 | 0.2326 | 0.1113 | 0.4282 | 0.1766 | 0.1155 | 0.6701 | 0.1970 | 0.1240 | 0.7193 | 0.2115 | 0.1244 | 0.7218 | 0.2122 | 0.1198 | 0.6954 | 0.2044 | 0.0289 | 0.3930 | 0.0539 | 0.0305 | 0.4142 | 0.0568 | 0.0360 | 0.4888 | 0.0671 | 0.0343 | 0.4658 | 0.0639 | 0.1173 | 0.8009 | 0.2047 | 0.1310 | 0.8942 | 0.2285 | 0.1245 | 0.8499 | 0.2172 | 0.1174 | 0.8014 | 0.2048 | 0.1528 | 0.6005 | 0.2436 | 0.1719 | 0.6756 | 0.2740 | 0.1677 | 0.6591 | 0.2673 | 0.1569 | 0.6168 | 0.2502 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1625 | 0.7943 | 0.2699 | 0.1862 | 0.9099 | 0.3091 | 0.1747 | 0.8537 | 0.2900 | 0.1628 | 0.7955 | 0.2702 | 0.0824 | 0.3075 | 0.1300 | 0.1440 | 0.5375 | 0.2272 | 0.1153 | 0.4301 | 0.1818 | 0.0829 | 0.3091 | 0.1307 | 0.0781 | 0.7418 | 0.1414 | 0.0887 | 0.8421 | 0.1605 | 0.0848 | 0.8046 | 0.1533 | 0.0793 | 0.7532 | 0.1436 | 0.0421 | 0.5284 | 0.0779 | 0.0476 | 0.5981 | 0.0882 | 0.0450 | 0.5650 | 0.0833 | 0.0422 | 0.5297 | 0.0781 | 0.2001 | 0.8372 | 0.3230 | 0.2144 | 0.8973 | 0.3461 | 0.2087 | 0.8732 | 0.3369 | 0.2013 | 0.8423 | 0.3249 | 0.0875 | 0.7355 | 0.1564 | 0.0996 | 0.8367 | 0.1780 | 0.0950 | 0.7983 | 0.1698 | 0.0886 | 0.7450 | 0.1584 | 0.1668 | 0.6862 | 0.2683 | 0.1785 | 0.7343 | 0.2871 | 0.1741 | 0.7163 | 0.2801 | 0.1676 | 0.6894 | 0.2696 | 0.1888 | 0.7490 | 0.3016 | 0.2069 | 0.8207 | 0.3305 | 0.1984 | 0.7869 | 0.3169 | 0.1889 | 0.7494 | 0.3018 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2459 | 0.8971 | 0.3860 | 0.2495 | 0.9103 | 0.3916 | 0.2495 | 0.9102 | 0.3916 | 0.2474 | 0.9027 | 0.3884 | 0.0440 | 0.6599 | 0.0825 | 0.0452 | 0.6784 | 0.0848 | 0.0470 | 0.7043 | 0.0881 | 0.0461 | 0.6911 | 0.0864 | 0.1750 | 0.6657 | 0.2772 | 0.2226 | 0.8468 | 0.3526 | 0.2032 | 0.7727 | 0.3217 | 0.1766 | 0.6717 | 0.2797 | 0.1592 | 0.8509 | 0.2683 | 0.1713 | 0.9155 | 0.2886 | 0.1653 | 0.8835 | 0.2785 | 0.1593 | 0.8511 | 0.2683 | 0.2448 | 0.8861 | 0.3836 | 0.2497 | 0.9040 | 0.3914 | 0.2512 | 0.9092 | 0.3936 | 0.2483 | 0.8987 | 0.3890 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2205 | 0.8084 | 0.3465 | 0.2217 | 0.8128 | 0.3484 | 0.2240 | 0.8210 | 0.3519 | 0.2233 | 0.8186 | 0.3509 | 0.1390 | 0.8472 | 0.2387 | 0.1403 | 0.8554 | 0.2410 | 0.1432 | 0.8730 | 0.2460 | 0.1420 | 0.8658 | 0.2440 | 0.1566 | 0.6681 | 0.2537 | 0.1704 | 0.7273 | 0.2761 | 0.1652 | 0.7053 | 0.2678 | 0.1581 | 0.6748 | 0.2562 | 0.0481 | 0.5195 | 0.0881 | 0.0526 | 0.5680 | 0.0963 | 0.0603 | 0.6514 | 0.1104 | 0.0573 | 0.6184 | 0.1048 | 0.0910 | 0.5044 | 0.1541 | 0.0920 | 0.5103 | 0.1559 | 0.0917 | 0.5084 | 0.1554 | 0.0911 | 0.5053 | 0.1544 | 0.0282 | 0.5088 | 0.0535 | 0.0300 | 0.5406 | 0.0568 | 0.0342 | 0.6171 | 0.0649 | 0.0329 | 0.5926 | 0.0623 | 0.1893 | 0.7503 | 0.3023 | 0.2271 | 0.9003 | 0.3627 | 0.2109 | 0.8359 | 0.3368 | 0.1904 | 0.7549 | 0.3042 | 0.2182 | 0.9038 | 0.3515 | 0.2234 | 0.9255 | 0.3599 | 0.2229 | 0.9235 | 0.3591 | 0.2202 | 0.9122 | 0.3547 | 0.1232 | 0.8415 | 0.2150 | 0.1250 | 0.8538 | 0.2181 | 0.1243 | 0.8491 | 0.2169 | 0.1234 | 0.8429 | 0.2153 | 0.2015 | 0.8081 | 0.3226 | 
0.2181 | 0.8748 | 0.3492 | 0.2163 | 0.8673 | 0.3462 | 0.2067 | 0.8292 | 0.3310 | | 0.0983 | 3.0 | 54165 | 0.2514 | 0.7551 | 0.7198 | 0.7370 | 0.8417 | 0.8024 | 0.8216 | 0.8125 | 0.7745 | 0.7930 | 0.7659 | 0.7302 | 0.7476 | 0.2244 | 0.9089 | 0.3599 | 0.2309 | 0.9355 | 0.3704 | 0.2303 | 0.9329 | 0.3694 | 0.2262 | 0.9164 | 0.3629 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0035 | 0.2243 | 0.0070 | 0.0042 | 0.2665 | 0.0083 | 0.0067 | 0.4228 | 0.0132 | 0.0062 | 0.3931 | 0.0122 | 0.0153 | 0.3452 | 0.0293 | 0.0169 | 0.3818 | 0.0324 | 0.0208 | 0.4683 | 0.0398 | 0.0194 | 0.4374 | 0.0372 | 0.1571 | 0.5264 | 0.2420 | 0.2200 | 0.7373 | 0.3389 | 0.1903 | 0.6378 | 0.2932 | 0.1578 | 0.5289 | 0.2431 | 0.0072 | 0.3349 | 0.0141 | 0.0102 | 0.4734 | 0.0199 | 0.0087 | 0.4061 | 0.0171 | 0.0072 | 0.3369 | 0.0142 | 0.1163 | 0.4507 | 0.1848 | 0.1810 | 0.7016 | 0.2877 | 0.1505 | 0.5836 | 0.2393 | 0.1165 | 0.4518 | 0.1853 | 0.1156 | 0.6659 | 0.1970 | 0.1258 | 0.7247 | 0.2144 | 0.1252 | 0.7210 | 0.2133 | 0.1197 | 0.6895 | 0.2040 | 0.0224 | 0.3602 | 0.0422 | 0.0244 | 0.3925 | 0.0460 | 0.0299 | 0.4809 | 0.0563 | 0.0282 | 0.4531 | 0.0531 | 0.1204 | 0.8100 | 0.2096 | 0.1338 | 0.9000 | 0.2329 | 0.1275 | 0.8575 | 0.2219 | 0.1207 | 0.8118 | 0.2101 | 0.1580 | 0.6140 | 0.2513 | 0.1768 | 0.6872 | 0.2812 | 0.1737 | 0.6755 | 0.2764 | 0.1625 | 0.6318 | 0.2585 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1671 | 0.8083 | 0.2770 | 0.1909 | 0.9232 | 0.3164 | 0.1793 | 0.8671 | 0.2971 | 0.1673 | 0.8091 | 0.2772 | 0.0855 | 0.3056 | 0.1336 | 0.1496 | 0.5349 | 0.2338 | 0.1196 | 0.4276 | 0.1869 | 0.0858 | 0.3067 | 0.1340 | 0.0791 | 0.7719 | 0.1434 | 0.0906 | 0.8842 | 0.1643 | 0.0855 | 0.8345 | 0.1551 | 0.0796 | 0.7775 | 0.1445 | 0.0446 | 0.5133 | 0.0821 | 0.0499 | 0.5733 | 0.0917 | 0.0473 | 0.5442 | 0.0871 | 0.0447 | 0.5140 | 0.0822 | 0.2044 | 0.8322 | 0.3282 | 0.2201 | 0.8959 | 0.3533 | 0.2138 | 0.8705 | 0.3433 | 0.2058 | 0.8380 | 0.3305 | 0.0920 | 0.7694 | 0.1643 | 0.1034 | 0.8654 | 0.1848 | 0.0989 | 0.8271 | 0.1766 | 0.0930 | 0.7783 | 0.1662 | 0.1722 | 0.7061 | 0.2768 | 0.1839 | 0.7541 | 0.2956 | 0.1794 | 0.7356 | 0.2884 | 0.1729 | 0.7090 | 0.2779 | 0.1936 | 0.7682 | 0.3092 | 0.2103 | 0.8347 | 0.3360 | 0.2028 | 0.8047 | 0.3239 | 0.1939 | 0.7695 | 0.3097 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2501 | 0.9 | 0.3914 | 0.2533 | 0.9116 | 0.3965 | 0.2540 | 0.9140 | 0.3975 | 0.2522 | 0.9075 | 0.3947 | 0.0463 | 0.6974 | 0.0868 | 0.0476 | 0.7166 | 0.0892 | 0.0494 | 0.7440 | 0.0926 | 0.0485 | 0.7307 | 0.0910 | 0.1665 | 0.8698 | 0.2795 | 0.1752 | 0.9153 | 0.2941 | 0.1709 | 0.8929 | 0.2869 | 0.1666 | 0.8701 | 0.2796 | 0.2484 | 0.8958 | 0.3889 | 0.2524 | 0.9104 | 0.3953 | 0.2545 | 0.9178 | 0.3985 | 0.2520 | 0.9090 | 0.3947 | 0.1787 | 0.6826 | 0.2832 | 0.2260 | 0.8636 | 0.3583 | 0.2061 | 0.7873 | 0.3267 | 0.1800 | 0.6878 | 0.2853 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2293 | 0.8028 | 0.3567 | 0.2336 | 0.8178 | 0.3634 | 0.2350 | 0.8228 | 0.3656 | 0.2327 | 0.8148 | 0.3620 | 0.1410 | 0.8599 | 0.2423 | 0.1421 | 0.8667 | 0.2442 | 0.1454 | 0.8866 | 0.2498 | 0.1443 | 0.8804 | 0.2480 | 0.1598 | 0.6782 | 0.2587 | 0.1746 | 0.7406 | 0.2825 | 0.1692 | 0.7177 | 0.2738 | 0.1614 | 0.6846 | 0.2612 | 0.0475 | 0.5852 | 0.0878 | 0.0512 | 0.6316 | 0.0948 | 0.0553 | 0.6820 | 0.1024 | 0.0528 | 0.6511 | 0.0977 | 0.0888 | 0.5225 | 0.1518 | 0.0899 | 0.5289 | 0.1536 | 0.0895 | 0.5267 | 0.1530 | 0.0890 | 0.5235 | 0.1521 | 0.0278 | 0.4286 | 
0.0522 | 0.0295 | 0.4558 | 0.0555 | 0.0355 | 0.5476 | 0.0667 | 0.0340 | 0.5245 | 0.0638 | 0.1940 | 0.7635 | 0.3093 | 0.2306 | 0.9076 | 0.3677 | 0.2151 | 0.8467 | 0.3430 | 0.1952 | 0.7684 | 0.3113 | 0.2213 | 0.9000 | 0.3553 | 0.2275 | 0.9250 | 0.3652 | 0.2263 | 0.9203 | 0.3633 | 0.2231 | 0.9073 | 0.3581 | 0.1244 | 0.8484 | 0.2170 | 0.1262 | 0.8608 | 0.2202 | 0.1255 | 0.8559 | 0.2189 | 0.1246 | 0.8497 | 0.2173 | 0.2043 | 0.8202 | 0.3272 | 0.2220 | 0.8911 | 0.3554 | 0.2185 | 0.8772 | 0.3499 | 0.2086 | 0.8375 | 0.3341 | | 0.0815 | 4.0 | 72220 | 0.2789 | 0.7536 | 0.7228 | 0.7379 | 0.8386 | 0.8044 | 0.8212 | 0.8093 | 0.7763 | 0.7924 | 0.7636 | 0.7325 | 0.7477 | 0.2202 | 0.9085 | 0.3545 | 0.2263 | 0.9336 | 0.3643 | 0.2259 | 0.9320 | 0.3637 | 0.2222 | 0.9167 | 0.3577 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0018 | 0.2211 | 0.0036 | 0.0023 | 0.2776 | 0.0046 | 0.0037 | 0.4509 | 0.0074 | 0.0034 | 0.4054 | 0.0067 | 0.0127 | 0.3679 | 0.0245 | 0.0145 | 0.4214 | 0.0281 | 0.0168 | 0.4886 | 0.0326 | 0.0153 | 0.4453 | 0.0297 | 0.1540 | 0.5269 | 0.2384 | 0.2175 | 0.7439 | 0.3366 | 0.1875 | 0.6415 | 0.2902 | 0.1550 | 0.5301 | 0.2398 | 0.0076 | 0.3692 | 0.0149 | 0.0108 | 0.5230 | 0.0212 | 0.0093 | 0.4496 | 0.0182 | 0.0077 | 0.3722 | 0.0151 | 0.1189 | 0.4584 | 0.1888 | 0.1802 | 0.6948 | 0.2862 | 0.1520 | 0.5861 | 0.2414 | 0.1193 | 0.4600 | 0.1895 | 0.1158 | 0.6833 | 0.1981 | 0.1239 | 0.7311 | 0.2119 | 0.1242 | 0.7328 | 0.2124 | 0.1196 | 0.7057 | 0.2046 | 0.0248 | 0.3627 | 0.0465 | 0.0265 | 0.3875 | 0.0497 | 0.0331 | 0.4833 | 0.0619 | 0.0314 | 0.4584 | 0.0588 | 0.1155 | 0.7906 | 0.2016 | 0.1307 | 0.8949 | 0.2281 | 0.1236 | 0.8460 | 0.2157 | 0.1159 | 0.7935 | 0.2023 | 0.1578 | 0.6394 | 0.2532 | 0.1730 | 0.7009 | 0.2775 | 0.1713 | 0.6939 | 0.2747 | 0.1617 | 0.6552 | 0.2594 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1611 | 0.7910 | 0.2677 | 0.1860 | 0.9129 | 0.3090 | 0.1738 | 0.8534 | 0.2889 | 0.1614 | 0.7921 | 0.2681 | 0.0916 | 0.3369 | 0.1441 | 0.1518 | 0.5580 | 0.2386 | 0.1240 | 0.4559 | 0.1950 | 0.0919 | 0.3378 | 0.1445 | 0.0785 | 0.7811 | 0.1426 | 0.0885 | 0.8810 | 0.1608 | 0.0842 | 0.8381 | 0.1530 | 0.0790 | 0.7867 | 0.1436 | 0.0444 | 0.5450 | 0.0821 | 0.0491 | 0.6027 | 0.0908 | 0.0469 | 0.5750 | 0.0866 | 0.0445 | 0.5457 | 0.0822 | 0.2007 | 0.8422 | 0.3241 | 0.2143 | 0.8995 | 0.3461 | 0.2093 | 0.8784 | 0.3380 | 0.2022 | 0.8487 | 0.3266 | 0.0886 | 0.7594 | 0.1587 | 0.1004 | 0.8607 | 0.1799 | 0.0956 | 0.8196 | 0.1713 | 0.0897 | 0.7684 | 0.1606 | 0.1698 | 0.7026 | 0.2735 | 0.1811 | 0.7494 | 0.2917 | 0.1768 | 0.7315 | 0.2847 | 0.1705 | 0.7058 | 0.2747 | 0.1835 | 0.7644 | 0.2959 | 0.1995 | 0.8315 | 0.3219 | 0.1923 | 0.8012 | 0.3101 | 0.1839 | 0.7663 | 0.2966 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2405 | 0.8854 | 0.3783 | 0.2444 | 0.8996 | 0.3843 | 0.2439 | 0.8980 | 0.3836 | 0.2419 | 0.8904 | 0.3804 | 0.0453 | 0.7469 | 0.0855 | 0.0461 | 0.7600 | 0.0870 | 0.0478 | 0.7879 | 0.0902 | 0.0472 | 0.7773 | 0.0890 | 0.1625 | 0.8618 | 0.2735 | 0.1720 | 0.9118 | 0.2894 | 0.1674 | 0.8873 | 0.2816 | 0.1626 | 0.8621 | 0.2736 | 0.2401 | 0.8980 | 0.3789 | 0.2447 | 0.9152 | 0.3861 | 0.2453 | 0.9177 | 0.3872 | 0.2427 | 0.9079 | 0.3830 | 0.1757 | 0.6890 | 0.2799 | 0.2214 | 0.8686 | 0.3529 | 0.2019 | 0.7919 | 0.3218 | 0.1771 | 0.6945 | 0.2822 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2257 | 0.8147 | 0.3535 | 0.2277 | 0.8220 | 0.3567 | 0.2298 | 0.8294 | 0.3598 | 0.2286 | 0.8253 | 0.3581 | 0.1343 | 
0.8579 | 0.2323 | 0.1359 | 0.8679 | 0.2350 | 0.1383 | 0.8832 | 0.2391 | 0.1372 | 0.8761 | 0.2372 | 0.1489 | 0.6780 | 0.2442 | 0.1639 | 0.7464 | 0.2688 | 0.1581 | 0.7200 | 0.2593 | 0.1502 | 0.6841 | 0.2464 | 0.0437 | 0.5966 | 0.0815 | 0.0469 | 0.6402 | 0.0874 | 0.0504 | 0.6873 | 0.0938 | 0.0484 | 0.6609 | 0.0902 | 0.0900 | 0.5174 | 0.1533 | 0.0906 | 0.5207 | 0.1543 | 0.0904 | 0.5196 | 0.1540 | 0.0900 | 0.5176 | 0.1534 | 0.0242 | 0.4014 | 0.0456 | 0.0261 | 0.4333 | 0.0492 | 0.0312 | 0.5186 | 0.0589 | 0.0295 | 0.4895 | 0.0556 | 0.1894 | 0.7602 | 0.3033 | 0.2261 | 0.9073 | 0.3620 | 0.2102 | 0.8435 | 0.3365 | 0.1906 | 0.7649 | 0.3052 | 0.2140 | 0.8967 | 0.3455 | 0.2206 | 0.9243 | 0.3561 | 0.2191 | 0.9180 | 0.3537 | 0.2157 | 0.9040 | 0.3483 | 0.1220 | 0.8441 | 0.2133 | 0.1236 | 0.8550 | 0.2160 | 0.1230 | 0.8507 | 0.2149 | 0.1222 | 0.8452 | 0.2135 | 0.2027 | 0.8300 | 0.3259 | 0.2167 | 0.8874 | 0.3484 | 0.2153 | 0.8814 | 0.3460 | 0.2073 | 0.8489 | 0.3333 | | 0.0728 | 5.0 | 90275 | 0.2844 | 0.7534 | 0.7249 | 0.7389 | 0.8383 | 0.8066 | 0.8221 | 0.8087 | 0.7782 | 0.7932 | 0.7633 | 0.7346 | 0.7487 | 0.2187 | 0.9117 | 0.3528 | 0.2249 | 0.9376 | 0.3628 | 0.2245 | 0.9357 | 0.3621 | 0.2206 | 0.9197 | 0.3559 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0022 | 0.2401 | 0.0044 | 0.0025 | 0.2753 | 0.0050 | 0.0043 | 0.4670 | 0.0085 | 0.0039 | 0.4295 | 0.0078 | 0.0122 | 0.3517 | 0.0236 | 0.0144 | 0.4149 | 0.0278 | 0.0165 | 0.4767 | 0.0320 | 0.0149 | 0.4305 | 0.0289 | 0.1541 | 0.5362 | 0.2394 | 0.2163 | 0.7527 | 0.3360 | 0.1873 | 0.6517 | 0.2909 | 0.1551 | 0.5397 | 0.2409 | 0.0050 | 0.2975 | 0.0098 | 0.0078 | 0.4704 | 0.0154 | 0.0064 | 0.3857 | 0.0126 | 0.0050 | 0.2975 | 0.0098 | 0.1147 | 0.4477 | 0.1826 | 0.1771 | 0.6913 | 0.2820 | 0.1484 | 0.5792 | 0.2362 | 0.1154 | 0.4505 | 0.1837 | 0.1199 | 0.6735 | 0.2036 | 0.1278 | 0.7176 | 0.2170 | 0.1291 | 0.7249 | 0.2192 | 0.1247 | 0.7001 | 0.2117 | 0.0199 | 0.3916 | 0.0379 | 0.0217 | 0.4270 | 0.0413 | 0.0255 | 0.5018 | 0.0485 | 0.0242 | 0.4768 | 0.0461 | 0.1159 | 0.7992 | 0.2024 | 0.1300 | 0.8968 | 0.2271 | 0.1233 | 0.8506 | 0.2154 | 0.1161 | 0.8010 | 0.2029 | 0.1577 | 0.6502 | 0.2538 | 0.1714 | 0.7068 | 0.2759 | 0.1687 | 0.6956 | 0.2715 | 0.1604 | 0.6613 | 0.2581 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1616 | 0.7899 | 0.2683 | 0.1854 | 0.9065 | 0.3078 | 0.1738 | 0.8498 | 0.2886 | 0.1617 | 0.7908 | 0.2686 | 0.0889 | 0.3295 | 0.1400 | 0.1512 | 0.5603 | 0.2381 | 0.1218 | 0.4514 | 0.1918 | 0.0893 | 0.3310 | 0.1407 | 0.0775 | 0.7944 | 0.1412 | 0.0868 | 0.8894 | 0.1581 | 0.0828 | 0.8489 | 0.1509 | 0.0780 | 0.7999 | 0.1422 | 0.0415 | 0.5477 | 0.0771 | 0.0466 | 0.6160 | 0.0867 | 0.0441 | 0.5828 | 0.0821 | 0.0415 | 0.5482 | 0.0772 | 0.2003 | 0.8460 | 0.3239 | 0.2134 | 0.9014 | 0.3452 | 0.2085 | 0.8808 | 0.3372 | 0.2018 | 0.8521 | 0.3263 | 0.0869 | 0.7712 | 0.1563 | 0.0981 | 0.8706 | 0.1764 | 0.0933 | 0.8278 | 0.1677 | 0.0876 | 0.7772 | 0.1575 | 0.1697 | 0.6965 | 0.2729 | 0.1806 | 0.7412 | 0.2905 | 0.1767 | 0.7250 | 0.2841 | 0.1707 | 0.7005 | 0.2745 | 0.1821 | 0.7633 | 0.2941 | 0.1991 | 0.8346 | 0.3215 | 0.1914 | 0.8023 | 0.3091 | 0.1826 | 0.7654 | 0.2948 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2419 | 0.8962 | 0.3809 | 0.2444 | 0.9057 | 0.3850 | 0.2451 | 0.9082 | 0.3860 | 0.2437 | 0.9029 | 0.3838 | 0.0480 | 0.6727 | 0.0896 | 0.0498 | 0.6975 | 0.0929 | 0.0522 | 0.7310 | 0.0974 | 0.0510 | 0.7142 | 0.0952 | 0.1605 | 0.8630 | 0.2706 | 0.1702 | 0.9156 | 0.2871 | 
0.1654 | 0.8896 | 0.2790 | 0.1605 | 0.8633 | 0.2707 | 0.2402 | 0.8893 | 0.3782 | 0.2447 | 0.9061 | 0.3853 | 0.2468 | 0.9137 | 0.3886 | 0.2441 | 0.9039 | 0.3844 | 0.1727 | 0.6889 | 0.2762 | 0.2198 | 0.8770 | 0.3516 | 0.1993 | 0.7950 | 0.3187 | 0.1738 | 0.6934 | 0.2780 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2209 | 0.8393 | 0.3498 | 0.2222 | 0.8442 | 0.3518 | 0.2246 | 0.8533 | 0.3556 | 0.2239 | 0.8507 | 0.3545 | 0.1375 | 0.8534 | 0.2369 | 0.1394 | 0.8648 | 0.2400 | 0.1426 | 0.8851 | 0.2457 | 0.1412 | 0.8760 | 0.2431 | 0.1498 | 0.6601 | 0.2442 | 0.1657 | 0.7300 | 0.2701 | 0.1598 | 0.7038 | 0.2604 | 0.1513 | 0.6666 | 0.2466 | 0.0485 | 0.5827 | 0.0896 | 0.0516 | 0.6199 | 0.0953 | 0.0559 | 0.6712 | 0.1032 | 0.0538 | 0.6456 | 0.0993 | 0.0864 | 0.5044 | 0.1475 | 0.0867 | 0.5060 | 0.1480 | 0.0866 | 0.5056 | 0.1479 | 0.0864 | 0.5045 | 0.1476 | 0.0236 | 0.4874 | 0.0451 | 0.0251 | 0.5179 | 0.0479 | 0.0287 | 0.5915 | 0.0547 | 0.0276 | 0.5683 | 0.0526 | 0.1874 | 0.7546 | 0.3002 | 0.2243 | 0.9035 | 0.3594 | 0.2082 | 0.8386 | 0.3336 | 0.1885 | 0.7590 | 0.3020 | 0.2142 | 0.8993 | 0.3460 | 0.2194 | 0.9212 | 0.3544 | 0.2184 | 0.9171 | 0.3528 | 0.2158 | 0.9060 | 0.3486 | 0.1215 | 0.8486 | 0.2125 | 0.1231 | 0.8600 | 0.2154 | 0.1225 | 0.8554 | 0.2143 | 0.1216 | 0.8497 | 0.2128 | 0.2020 | 0.8351 | 0.3253 | 0.2159 | 0.8926 | 0.3477 | 0.2133 | 0.8819 | 0.3435 | 0.2056 | 0.8499 | 0.3310 | | 0.065 | 6.0 | 108330 | 0.2932 | 0.7534 | 0.7213 | 0.7370 | 0.8389 | 0.8032 | 0.8206 | 0.8096 | 0.7751 | 0.7920 | 0.7637 | 0.7312 | 0.7471 | 0.2199 | 0.9129 | 0.3544 | 0.2267 | 0.9411 | 0.3653 | 0.2258 | 0.9375 | 0.3639 | 0.2215 | 0.9198 | 0.3571 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0029 | 0.2356 | 0.0058 | 0.0033 | 0.2636 | 0.0064 | 0.0053 | 0.4292 | 0.0105 | 0.0049 | 0.3937 | 0.0096 | 0.0133 | 0.3521 | 0.0257 | 0.0148 | 0.3915 | 0.0285 | 0.0181 | 0.4779 | 0.0348 | 0.0170 | 0.4484 | 0.0327 | 0.1578 | 0.5186 | 0.2419 | 0.2217 | 0.7287 | 0.3400 | 0.1923 | 0.6321 | 0.2949 | 0.1590 | 0.5226 | 0.2438 | 0.0058 | 0.3287 | 0.0113 | 0.0083 | 0.4739 | 0.0163 | 0.0072 | 0.4094 | 0.0141 | 0.0059 | 0.3345 | 0.0115 | 0.1178 | 0.4479 | 0.1865 | 0.1816 | 0.6903 | 0.2875 | 0.1523 | 0.5791 | 0.2412 | 0.1181 | 0.4489 | 0.1870 | 0.1188 | 0.6811 | 0.2023 | 0.1268 | 0.7268 | 0.2159 | 0.1284 | 0.7364 | 0.2187 | 0.1239 | 0.7102 | 0.2109 | 0.0203 | 0.3765 | 0.0384 | 0.0221 | 0.4099 | 0.0418 | 0.0256 | 0.4761 | 0.0486 | 0.0242 | 0.4492 | 0.0459 | 0.1167 | 0.8092 | 0.2040 | 0.1304 | 0.9045 | 0.2280 | 0.1236 | 0.8573 | 0.2161 | 0.1167 | 0.8094 | 0.2040 | 0.1579 | 0.6398 | 0.2533 | 0.1735 | 0.7028 | 0.2783 | 0.1700 | 0.6885 | 0.2726 | 0.1611 | 0.6526 | 0.2584 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1621 | 0.7928 | 0.2692 | 0.1870 | 0.9145 | 0.3105 | 0.1749 | 0.8553 | 0.2904 | 0.1624 | 0.7940 | 0.2696 | 0.0909 | 0.3204 | 0.1416 | 0.1543 | 0.5440 | 0.2404 | 0.1245 | 0.4391 | 0.1941 | 0.0913 | 0.3218 | 0.1422 | 0.0768 | 0.7806 | 0.1399 | 0.0880 | 0.8937 | 0.1602 | 0.0833 | 0.8465 | 0.1517 | 0.0776 | 0.7884 | 0.1413 | 0.0416 | 0.5404 | 0.0772 | 0.0467 | 0.6070 | 0.0867 | 0.0442 | 0.5744 | 0.0821 | 0.0416 | 0.5412 | 0.0773 | 0.2014 | 0.8367 | 0.3247 | 0.2148 | 0.8923 | 0.3463 | 0.2098 | 0.8716 | 0.3382 | 0.2029 | 0.8429 | 0.3271 | 0.0887 | 0.7639 | 0.1589 | 0.1003 | 0.8640 | 0.1798 | 0.0957 | 0.8243 | 0.1715 | 0.0897 | 0.7729 | 0.1608 | 0.1703 | 0.6939 | 0.2734 | 0.1816 | 0.7403 | 0.2917 | 0.1775 | 0.7236 | 0.2851 | 0.1712 | 
0.6980 | 0.2750 | 0.1842 | 0.7793 | 0.2980 | 0.2003 | 0.8471 | 0.3240 | 0.1930 | 0.8162 | 0.3121 | 0.1845 | 0.7805 | 0.2985 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.2416 | 0.9048 | 0.3814 | 0.2448 | 0.9168 | 0.3864 | 0.2452 | 0.9182 | 0.3870 | 0.2434 | 0.9117 | 0.3843 | 0.0469 | 0.6678 | 0.0877 | 0.0482 | 0.6858 | 0.0901 | 0.0503 | 0.7157 | 0.0940 | 0.0494 | 0.7029 | 0.0923 | 0.1620 | 0.8692 | 0.2731 | 0.1712 | 0.9185 | 0.2886 | 0.1667 | 0.8942 | 0.2810 | 0.1621 | 0.8695 | 0.2732 | 0.2405 | 0.8949 | 0.3792 | 0.2448 | 0.9107 | 0.3858 | 0.2462 | 0.9160 | 0.3881 | 0.2436 | 0.9064 | 0.3840 | 0.1763 | 0.6960 | 0.2813 | 0.2212 | 0.8733 | 0.3530 | 0.2018 | 0.7966 | 0.3220 | 0.1774 | 0.7003 | 0.2831 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2250 | 0.8269 | 0.3537 | 0.2265 | 0.8325 | 0.3561 | 0.2294 | 0.8433 | 0.3607 | 0.2285 | 0.8401 | 0.3593 | 0.1390 | 0.8504 | 0.2389 | 0.1406 | 0.8604 | 0.2417 | 0.1438 | 0.8800 | 0.2473 | 0.1424 | 0.8711 | 0.2448 | 0.1479 | 0.6702 | 0.2424 | 0.1638 | 0.7423 | 0.2684 | 0.1575 | 0.7135 | 0.2580 | 0.1492 | 0.6758 | 0.2444 | 0.0486 | 0.5731 | 0.0895 | 0.0516 | 0.6091 | 0.0952 | 0.0559 | 0.6601 | 0.1031 | 0.0539 | 0.6367 | 0.0995 | 0.0844 | 0.5106 | 0.1449 | 0.0847 | 0.5122 | 0.1453 | 0.0847 | 0.5120 | 0.1453 | 0.0845 | 0.5108 | 0.1450 | 0.0280 | 0.4617 | 0.0528 | 0.0289 | 0.4770 | 0.0545 | 0.0348 | 0.5746 | 0.0657 | 0.0338 | 0.5573 | 0.0637 | 0.1904 | 0.7649 | 0.3049 | 0.2262 | 0.9086 | 0.3622 | 0.2109 | 0.8473 | 0.3377 | 0.1916 | 0.7696 | 0.3068 | 0.2154 | 0.8975 | 0.3474 | 0.2215 | 0.9228 | 0.3572 | 0.2199 | 0.9164 | 0.3547 | 0.2168 | 0.9034 | 0.3497 | 0.1211 | 0.8243 | 0.2111 | 0.1230 | 0.8374 | 0.2145 | 0.1222 | 0.8322 | 0.2132 | 0.1213 | 0.8257 | 0.2115 | 0.2022 | 0.8303 | 0.3252 | 0.2176 | 0.8935 | 0.3500 | 0.2146 | 0.8811 | 0.3451 | 0.2061 | 0.8464 | 0.3315 | | 0.059 | 7.0 | 126385 | 0.2949 | 0.7519 | 0.7179 | 0.7345 | 0.8381 | 0.8003 | 0.8188 | 0.8083 | 0.7718 | 0.7896 | 0.7621 | 0.7277 | 0.7445 | 0.2195 | 0.9170 | 0.3542 | 0.2254 | 0.9418 | 0.3638 | 0.2248 | 0.9392 | 0.3628 | 0.2211 | 0.9238 | 0.3569 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0001 | 0.2857 | 0.0002 | 0.0000 | 0.0952 | 0.0001 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0037 | 0.2085 | 0.0072 | 0.0041 | 0.2350 | 0.0081 | 0.0071 | 0.4032 | 0.0139 | 0.0064 | 0.3641 | 0.0126 | 0.0135 | 0.3337 | 0.0260 | 0.0150 | 0.3706 | 0.0289 | 0.0186 | 0.4588 | 0.0357 | 0.0175 | 0.4312 | 0.0336 | 0.1554 | 0.5192 | 0.2392 | 0.2191 | 0.7320 | 0.3372 | 0.1893 | 0.6324 | 0.2913 | 0.1564 | 0.5226 | 0.2408 | 0.0063 | 0.3291 | 0.0124 | 0.0085 | 0.4420 | 0.0166 | 0.0075 | 0.3914 | 0.0147 | 0.0064 | 0.3323 | 0.0125 | 0.1139 | 0.4323 | 0.1803 | 0.1777 | 0.6745 | 0.2813 | 0.1478 | 0.5612 | 0.2340 | 0.1143 | 0.4340 | 0.1810 | 0.1148 | 0.6764 | 0.1962 | 0.1234 | 0.7273 | 0.2110 | 0.1237 | 0.7292 | 0.2116 | 0.1189 | 0.7008 | 0.2033 | 0.0219 | 0.3729 | 0.0415 | 0.0233 | 0.3950 | 0.0439 | 0.0282 | 0.4797 | 0.0533 | 0.0269 | 0.4568 | 0.0508 | 0.1148 | 0.8006 | 0.2008 | 0.1293 | 0.9018 | 0.2262 | 0.1221 | 0.8515 | 0.2136 | 0.1148 | 0.8007 | 0.2009 | 0.1573 | 0.6469 | 0.2531 | 0.1719 | 0.7066 | 0.2765 | 0.1685 | 0.6928 | 0.2711 | 0.1600 | 0.6580 | 0.2575 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.1583 | 0.7716 | 0.2627 | 0.1857 | 0.9053 | 0.3082 | 0.1723 | 0.8399 | 0.2859 | 0.1585 | 0.7728 | 0.2631 | 0.0867 | 0.3124 | 0.1357 | 0.1490 | 0.5371 | 0.2333 | 0.1200 | 0.4326 | 0.1878 | 0.0870 | 0.3138 | 0.1362 | 0.0777 | 0.7775 | 0.1413 | 0.0886 | 0.8861 | 0.1611 
| 0.0838 | 0.8382 | 0.1524 | 0.0782 | 0.7823 | 0.1422 | 0.0404 | 0.5421 | 0.0752 | 0.0452 | 0.6058 | 0.0841 | 0.0429 | 0.5750 | 0.0798 | 0.0405 | 0.5429 | 0.0753 | 0.2009 | 0.8425 | 0.3245 | 0.2139 | 0.8968 | 0.3454 | 0.2090 | 0.8766 | 0.3376 | 0.2024 | 0.8486 | 0.3268 | 0.0881 | 0.7564 | 0.1579 | 0.1002 | 0.8596 | 0.1794 | 0.0953 | 0.8180 | 0.1707 | 0.0891 | 0.7645 | 0.1596 | 0.1688 | 0.6958 | 0.2717 | 0.1798 | 0.7411 | 0.2895 | 0.1758 | 0.7245 | 0.2830 | 0.1697 | 0.6995 | 0.2732 | 0.1819 | 0.7664 | 0.2940 | 0.1991 | 0.8390 | 0.3219 | 0.1913 | 0.8058 | 0.3092 | 0.1822 | 0.7675 | 0.2944 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2421 | 0.9070 | 0.3822 | 0.2446 | 0.9163 | 0.3861 | 0.2451 | 0.9182 | 0.3869 | 0.2437 | 0.9130 | 0.3847 | 0.0471 | 0.6765 | 0.0880 | 0.0485 | 0.6973 | 0.0907 | 0.0504 | 0.7248 | 0.0943 | 0.0495 | 0.7112 | 0.0925 | 0.1617 | 0.8717 | 0.2728 | 0.1705 | 0.9189 | 0.2876 | 0.1662 | 0.8956 | 0.2803 | 0.1618 | 0.8719 | 0.2729 | 0.2397 | 0.8892 | 0.3777 | 0.2442 | 0.9058 | 0.3847 | 0.2456 | 0.9108 | 0.3868 | 0.2428 | 0.9006 | 0.3825 | 0.1729 | 0.6750 | 0.2753 | 0.2202 | 0.8598 | 0.3506 | 0.2000 | 0.7808 | 0.3184 | 0.1742 | 0.6803 | 0.2774 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2315 | 0.8246 | 0.3616 | 0.2330 | 0.8299 | 0.3638 | 0.2356 | 0.8390 | 0.3678 | 0.2347 | 0.8358 | 0.3665 | 0.1378 | 0.8575 | 0.2374 | 0.1393 | 0.8673 | 0.2401 | 0.1424 | 0.8861 | 0.2453 | 0.1410 | 0.8776 | 0.2430 | 0.0484 | 0.5757 | 0.0893 | 0.0515 | 0.6127 | 0.0951 | 0.0564 | 0.6706 | 0.1041 | 0.0545 | 0.6485 | 0.1006 | 0.1460 | 0.6676 | 0.2396 | 0.1602 | 0.7327 | 0.2630 | 0.1549 | 0.7083 | 0.2542 | 0.1474 | 0.6741 | 0.2419 | 0.0842 | 0.5120 | 0.1446 | 0.0846 | 0.5143 | 0.1453 | 0.0846 | 0.5147 | 0.1454 | 0.0842 | 0.5122 | 0.1447 | 0.0248 | 0.4616 | 0.0470 | 0.0259 | 0.4825 | 0.0491 | 0.0304 | 0.5671 | 0.0578 | 0.0295 | 0.5506 | 0.0561 | 0.1877 | 0.7545 | 0.3006 | 0.2248 | 0.9036 | 0.3601 | 0.2089 | 0.8395 | 0.3345 | 0.1888 | 0.7589 | 0.3024 | 0.2144 | 0.8910 | 0.3457 | 0.2210 | 0.9183 | 0.3563 | 0.2193 | 0.9112 | 0.3535 | 0.2159 | 0.8973 | 0.3481 | 0.1208 | 0.8259 | 0.2107 | 0.1226 | 0.8382 | 0.2139 | 0.1220 | 0.8346 | 0.2130 | 0.1211 | 0.8284 | 0.2114 | 0.2010 | 0.8310 | 0.3238 | 0.2164 | 0.8945 | 0.3485 | 0.2135 | 0.8824 | 0.3438 | 0.2048 | 0.8468 | 0.3299 | | 0.0567 | 8.0 | 144440 | 0.3047 | 0.7507 | 0.7167 | 0.7333 | 0.8382 | 0.8003 | 0.8188 | 0.8076 | 0.7711 | 0.7889 | 0.7608 | 0.7264 | 0.7432 | 0.2192 | 0.9137 | 0.3536 | 0.2254 | 0.9395 | 0.3636 | 0.2248 | 0.9369 | 0.3625 | 0.2209 | 0.9208 | 0.3563 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0002 | 0.4211 | 0.0003 | 0.0001 | 0.2632 | 0.0002 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0 | 0 | 0 | 0.0033 | 0.2074 | 0.0064 | 0.0036 | 0.2292 | 0.0071 | 0.0062 | 0.3905 | 0.0121 | 0.0057 | 0.3598 | 0.0112 | 0.0124 | 0.3330 | 0.0238 | 0.0142 | 0.3835 | 0.0275 | 0.0172 | 0.4629 | 0.0331 | 0.0159 | 0.4276 | 0.0306 | 0.1505 | 0.5088 | 0.2323 | 0.2157 | 0.7295 | 0.3330 | 0.1851 | 0.6259 | 0.2857 | 0.1514 | 0.5121 | 0.2337 | 0.0061 | 0.3566 | 0.0119 | 0.0083 | 0.4846 | 0.0162 | 0.0073 | 0.4277 | 0.0143 | 0.0062 | 0.3614 | 0.0121 | 0.1132 | 0.4311 | 0.1794 | 0.1774 | 0.6753 | 0.2810 | 0.1474 | 0.5611 | 0.2335 | 0.1136 | 0.4325 | 0.1800 | 0.1161 | 0.6701 | 0.1979 | 0.1247 | 0.7199 | 0.2126 | 0.1260 | 0.7275 | 0.2148 | 0.1213 | 0.7004 | 0.2069 | 0.0207 | 0.3902 | 0.0393 | 0.0221 | 0.4170 | 0.0420 | 0.0265 | 0.5002 | 0.0504 | 0.0253 | 0.4763 | 0.0480 | 0.1149 | 0.7952 | 0.2007 | 0.1294 | 0.8959 | 0.2261 | 0.1222 
| 0.8461 | 0.2136 | 0.1149 | 0.7954 | 0.2008 | 0.1571 | 0.6394 | 0.2522 | 0.1728 | 0.7034 | 0.2774 | 0.1690 | 0.6880 | 0.2714 | 0.1598 | 0.6504 | 0.2565 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.1589 | 0.7770 | 0.2639 | 0.1852 | 0.9052 | 0.3074 | 0.1724 | 0.8427 | 0.2862 | 0.1592 | 0.7782 | 0.2643 | 0.0876 | 0.3153 | 0.1371 | 0.1514 | 0.5452 | 0.2370 | 0.1214 | 0.4372 | 0.1901 | 0.0881 | 0.3171 | 0.1379 | 0.0786 | 0.7768 | 0.1427 | 0.0889 | 0.8793 | 0.1615 | 0.0847 | 0.8372 | 0.1538 | 0.0794 | 0.7850 | 0.1442 | 0.0421 | 0.5510 | 0.0783 | 0.0470 | 0.6141 | 0.0873 | 0.0447 | 0.5842 | 0.0830 | 0.0423 | 0.5525 | 0.0785 | 0.2015 | 0.8417 | 0.3252 | 0.2144 | 0.8956 | 0.3460 | 0.2097 | 0.8758 | 0.3384 | 0.2031 | 0.8482 | 0.3277 | 0.0887 | 0.7604 | 0.1588 | 0.1004 | 0.8608 | 0.1797 | 0.0956 | 0.8204 | 0.1713 | 0.0896 | 0.7683 | 0.1604 | 0.1696 | 0.6972 | 0.2729 | 0.1809 | 0.7437 | 0.2911 | 0.1768 | 0.7266 | 0.2844 | 0.1705 | 0.7007 | 0.2742 | 0.1822 | 0.7682 | 0.2945 | 0.1984 | 0.8366 | 0.3208 | 0.1911 | 0.8056 | 0.3089 | 0.1826 | 0.7699 | 0.2952 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0000 | 0.3333 | 0.0000 | 0.0 | 0.0 | 0 | 0.2401 | 0.9053 | 0.3795 | 0.2436 | 0.9186 | 0.3851 | 0.2437 | 0.9189 | 0.3853 | 0.2418 | 0.9116 | 0.3822 | 0.0463 | 0.6994 | 0.0868 | 0.0475 | 0.7181 | 0.0891 | 0.0489 | 0.7401 | 0.0918 | 0.0482 | 0.7289 | 0.0904 | 0.1616 | 0.8715 | 0.2727 | 0.1703 | 0.9185 | 0.2874 | 0.1660 | 0.8953 | 0.2801 | 0.1617 | 0.8717 | 0.2727 | 0.2382 | 0.8892 | 0.3758 | 0.2430 | 0.9068 | 0.3832 | 0.2442 | 0.9113 | 0.3851 | 0.2412 | 0.9004 | 0.3805 | 0.1722 | 0.6711 | 0.2740 | 0.2202 | 0.8583 | 0.3505 | 0.1993 | 0.7770 | 0.3173 | 0.1734 | 0.6758 | 0.2760 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.2261 | 0.8231 | 0.3547 | 0.2282 | 0.8309 | 0.3581 | 0.2304 | 0.8388 | 0.3615 | 0.2291 | 0.8342 | 0.3595 | 0.1378 | 0.8544 | 0.2374 | 0.1395 | 0.8646 | 0.2402 | 0.1427 | 0.8843 | 0.2457 | 0.1414 | 0.8764 | 0.2435 | 0.1480 | 0.6662 | 0.2422 | 0.1630 | 0.7340 | 0.2668 | 0.1573 | 0.7080 | 0.2574 | 0.1493 | 0.6722 | 0.2444 | 0.0475 | 0.5826 | 0.0879 | 0.0508 | 0.6226 | 0.0939 | 0.0549 | 0.6731 | 0.1015 | 0.0528 | 0.6478 | 0.0977 | 0.0865 | 0.5113 | 0.1480 | 0.0868 | 0.5129 | 0.1485 | 0.0868 | 0.5127 | 0.1484 | 0.0866 | 0.5116 | 0.1481 | 0.0260 | 0.4270 | 0.0490 | 0.0273 | 0.4483 | 0.0514 | 0.0321 | 0.5273 | 0.0605 | 0.0309 | 0.5084 | 0.0583 | 0.1865 | 0.7494 | 0.2986 | 0.2247 | 0.9029 | 0.3598 | 0.2081 | 0.8364 | 0.3333 | 0.1876 | 0.7540 | 0.3005 | 0.2145 | 0.8946 | 0.3460 | 0.2209 | 0.9214 | 0.3564 | 0.2193 | 0.9145 | 0.3537 | 0.2160 | 0.9009 | 0.3485 | 0.1203 | 0.8195 | 0.2099 | 0.1221 | 0.8313 | 0.2129 | 0.1215 | 0.8272 | 0.2118 | 0.1206 | 0.8213 | 0.2103 | 0.2014 | 0.8308 | 0.3242 | 0.2163 | 0.8922 | 0.3482 | 0.2135 | 0.8806 | 0.3436 | 0.2052 | 0.8463 | 0.3303 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
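For orientation, the hyperparameters listed above correspond roughly to the following `TrainingArguments` sketch. Only the output directory is invented here; the optimizer betas and epsilon are simply the listed Adam settings spelled out, and the card does not name the model or dataset, so those are omitted.

```python
from transformers import TrainingArguments

# Hedged sketch of the training configuration described above; only the
# output_dir is a placeholder, everything else mirrors the listed values.
training_args = TrainingArguments(
    output_dir="token-classification-run",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    warmup_ratio=0.0625,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```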
flax-community/mongolian-gpt2
flax-community
"2021-07-09T12:17:08Z"
4
3
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "mn", "dataset:oscar", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: "mn" thumbnail: "https://avatars.githubusercontent.com/u/43239645?s=60&v=4" tags: - gpt2 datasets: - oscar --- # Mongolian GPT2 Goal is to create a strong language generation model for Mongolian Since initial code and data is pretty much written by @patrickvonplaten and other huggingface members, it should not be so hard to get the first sense. ## Model Randomly initialized GPT2 model ## Datasets We can use OSCAR which is available through datasets ## Datasets A causal language modeling script for Flax is available here 1. It can be used pretty much without any required code changes. If there is time left, I’d love to try some private crawling and integrate it datasets. ## Expected Outcome Understandable Mongolian text generation model ## Challenges Lack of data → OSCAR Mongolian is just 2.2G. Maybe we need to research ways to acquire more data with this.
dt-and-vanilla-ardt/dt-robust_train_halfcheetah_level-3008_1944-99
dt-and-vanilla-ardt
"2023-08-30T21:00:40Z"
31
0
transformers
[ "transformers", "pytorch", "decision_transformer", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
"2023-08-30T18:46:42Z"
--- tags: - generated_from_trainer model-index: - name: dt-robust_train_halfcheetah_level-3008_1944-99 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dt-robust_train_halfcheetah_level-3008_1944-99 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 10000 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.1.0.dev20230727+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
mci29/sn29_z1m1_f1jn
mci29
"2025-02-01T05:55:13Z"
213
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-01T05:50:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jasonhuang3/Llama-3-Taiwan-8B-Instruct-unsloth-merged
jasonhuang3
"2025-03-09T06:43:31Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:yentinglin/Llama-3-Taiwan-8B-Instruct", "base_model:finetune:yentinglin/Llama-3-Taiwan-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-09T06:40:14Z"
--- base_model: yentinglin/Llama-3-Taiwan-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jasonhuang3 - **License:** apache-2.0 - **Finetuned from model :** yentinglin/Llama-3-Taiwan-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
JBhug/roberta-base_wifi
JBhug
"2024-05-08T15:20:50Z"
109
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-08T15:18:34Z"
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base_wifi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_wifi This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3180 - Accuracy: 0.9429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 27 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.494 | 1.67 | 10 | 0.3959 | 0.8286 | | 0.1178 | 3.33 | 20 | 0.0705 | 0.9714 | | 0.0033 | 5.0 | 30 | 0.2025 | 0.9429 | | 0.0009 | 6.67 | 40 | 0.2312 | 0.9714 | | 0.0003 | 8.33 | 50 | 0.2707 | 0.9429 | | 0.0003 | 10.0 | 60 | 0.3006 | 0.9429 | | 0.0002 | 11.67 | 70 | 0.3127 | 0.9429 | | 0.0002 | 13.33 | 80 | 0.3170 | 0.9429 | | 0.0002 | 15.0 | 90 | 0.3180 | 0.9429 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05
lixiqi
"2023-01-09T17:51:05Z"
183
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-01-09T09:14:42Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05 results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.7220674282529953 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.7881 - Accuracy: 0.7221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2307 | 1.0 | 224 | 1.0863 | 0.5874 | | 1.0893 | 2.0 | 448 | 0.9700 | 0.6362 | | 1.0244 | 3.0 | 672 | 0.8859 | 0.6757 | | 1.016 | 4.0 | 896 | 0.8804 | 0.6787 | | 0.9089 | 5.0 | 1120 | 0.8611 | 0.6897 | | 0.8935 | 6.0 | 1344 | 0.8283 | 0.7028 | | 0.8403 | 7.0 | 1568 | 0.8116 | 0.7102 | | 0.8179 | 8.0 | 1792 | 0.7934 | 0.7166 | | 0.7764 | 9.0 | 2016 | 0.7865 | 0.7208 | | 0.771 | 10.0 | 2240 | 0.7881 | 0.7221 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
A7m0d/whisper-tiny
A7m0d
"2024-10-02T17:51:54Z"
75
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-10-02T17:27:59Z"
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: fr-FR split: train args: fr-FR metrics: - name: Wer type: wer value: 41.85185185185185 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6370 - Wer: 41.8519 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.0019 | 17.2414 | 500 | 0.6370 | 41.8519 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
SLPL/Hubert-base-ShEMO
SLPL
"2024-06-10T12:52:49Z"
138
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:shemo", "base_model:facebook/hubert-base-ls960", "base_model:finetune:facebook/hubert-base-ls960", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
"2024-06-10T10:54:20Z"
--- license: apache-2.0 base_model: facebook/hubert-base-ls960 tags: - generated_from_trainer datasets: - shemo metrics: - f1 model-index: - name: results results: - task: name: Audio Classification type: audio-classification dataset: name: shemo type: shemo config: clean split: None args: clean metrics: - name: F1 type: f1 value: 0.8335174497965196 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the shemo dataset. It achieves the following results on the evaluation set: - Loss: 0.6161 - F1: 0.8335 ## Labels description - 0 : anger - 1 : happiness - 2 : neutral - 3 : sadness ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1127 | 1.0 | 154 | 0.9244 | 0.3968 | | 0.6982 | 2.0 | 308 | 0.5642 | 0.6435 | | 0.6246 | 3.0 | 462 | 0.5049 | 0.6273 | | 0.5097 | 4.0 | 616 | 0.4282 | 0.7246 | | 0.4496 | 5.0 | 770 | 0.3280 | 0.8158 | | 0.4476 | 6.0 | 924 | 0.4663 | 0.7978 | | 0.2212 | 7.0 | 1078 | 0.3253 | 0.8641 | | 0.1548 | 8.0 | 1232 | 0.9445 | 0.7420 | | 0.3829 | 9.0 | 1386 | 0.7194 | 0.7880 | | 0.0773 | 10.0 | 1540 | 0.5301 | 0.8657 | | 0.2481 | 11.0 | 1694 | 0.5321 | 0.8812 | | 0.0597 | 12.0 | 1848 | 0.6161 | 0.8335 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
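## Inference example (sketch)

A quick inference sketch for this checkpoint. The `pipeline` call below is a generic transformers audio-classification recipe rather than something taken from this card; the audio path is a placeholder, and the input is assumed to be 16 kHz mono, as expected by the hubert-base-ls960 backbone:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="SLPL/Hubert-base-ShEMO",
)

# "sample.wav" is a placeholder path to a 16 kHz mono utterance.
predictions = classifier("sample.wav")
print(predictions)  # scores over the four labels: anger, happiness, neutral, sadness
```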
Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2
Zoyd
"2024-06-04T16:49:54Z"
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
"2024-06-04T16:36:05Z"
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on reddit for the name who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/) [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like the polar opposite of them. This is the result of that, and I feel it lines it up in performance to a certain search engine's AI model series. 
## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
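## Sketch of the idea

For intuition only: this is a toy sketch of the "project a single behaviour direction out of the weights" step behind the orthogonalization described above, not the author's refined cookbook. The direction `v` stands in for a behaviour direction extracted from activations; all names are hypothetical:

```python
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from each column of `weight`.

    Assumes `weight` has shape [d_model, d_in] (it writes into the residual
    stream) and `direction` has shape [d_model].
    """
    d = direction / direction.norm()
    return weight - torch.outer(d, d @ weight)

d_model, d_in = 8, 4
w = torch.randn(d_model, d_in)
v = torch.randn(d_model)

w_orth = orthogonalize(w, v)
# The orthogonalized matrix can no longer write anything along v.
print(torch.allclose(v @ w_orth, torch.zeros(d_in), atol=1e-5))  # True
```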
appvoid/merging-7
appvoid
"2024-05-06T04:25:56Z"
138
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-06T04:25:04Z"
---
base_model:
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: appvoid/palmer-003
    layer_range: [0, 10]
- sources:
  - model: appvoid/palmer-003
    layer_range: [5, 15]
- sources:
  - model: appvoid/palmer-003
    layer_range: [10, 20]
- sources:
  - model: appvoid/palmer-003
    layer_range: [15, 21]
merge_method: passthrough
dtype: float16
```
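As a rough illustration of what the passthrough slices above produce (assuming mergekit's usual end-exclusive `layer_range` semantics, which is my reading rather than something stated here), the merged model simply stacks overlapping copies of the donor's layers:

```python
# Illustrative only: count the layers the passthrough config stacks together.
slices = [(0, 10), (5, 15), (10, 20), (15, 21)]

stacked = []
for start, end in slices:
    stacked.extend(range(start, end))  # assumes [start, end) slice semantics

print(len(stacked))   # 36 layers in the merged model
print(stacked[:12])   # note the overlaps: layers 5-9, 10-14, 15-20 are reused
```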
LightEmbed/baai-llm-embedder-onnx
LightEmbed
"2024-07-07T19:51:59Z"
4
0
light-embed
[ "light-embed", "onnx", "bert", "sentence-transformers", "feature-extraction", "sentence-similarity", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-05-14T19:16:41Z"
---
library_name: light-embed
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# LightEmbed/baai-llm-embedder-onnx

This is the ONNX version of the Sentence Transformers model BAAI/llm-embedder for sentence embedding, optimized for speed and lightweight deployment. By using onnxruntime and tokenizers instead of heavier libraries such as sentence-transformers and transformers, this version keeps the dependency footprint small and execution fast.

Details of the model:

- Base model: BAAI/llm-embedder
- Embedding dimension: 768
- Max sequence length: 512
- File size on disk: 0.41 GB
- Pooling incorporated: Yes

This ONNX model contains all components of the original sentence transformer model: Transformer, Pooling, Normalize.

## Usage (LightEmbed)

Using this model becomes easy when you have [LightEmbed](https://pypi.org/project/light-embed/) installed:

```
pip install -U light-embed
```

You can then load the model by its original model name:

```python
from light_embed import TextEmbedding

sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

model = TextEmbedding('BAAI/llm-embedder')
embeddings = model.encode(sentences)
print(embeddings)
```

Alternatively, you can load it by the ONNX model name:

```python
from light_embed import TextEmbedding

sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

model = TextEmbedding('LightEmbed/baai-llm-embedder-onnx')
embeddings = model.encode(sentences)
print(embeddings)
```

## Citing & Authors

Binh Nguyen / [email protected]
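## Similarity (sketch)

Following on from the usage snippets above: because the exported pipeline already includes the Normalize step, cosine similarity between two returned embeddings reduces to a plain dot product. A small sketch, assuming `encode` returns array-like vectors as in the examples above:

```python
import numpy as np
from light_embed import TextEmbedding

model = TextEmbedding("LightEmbed/baai-llm-embedder-onnx")
embeddings = model.encode([
    "This is an example sentence",
    "Each sentence is converted",
])

# Embeddings are already L2-normalized, so the dot product is the cosine similarity.
similarity = float(np.dot(embeddings[0], embeddings[1]))
print(f"cosine similarity: {similarity:.4f}")
```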
AIFT/AIFT-instruct-42dot_LLM-SFT-1.3B-dpo
AIFT
"2024-01-30T03:43:59Z"
150
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-30T00:17:21Z"
---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B-dpo</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks and, matching those tasks, built our own training data from open-source NLP resources: history, science, math, machine reading comprehension, and review-analysis problems were generated with GPT, and additional training data was built from the AI Hub general-knowledge and machine reading comprehension datasets (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were converted by hand into training-data format.
Following the format of the AI2AI Challenge data, about 500 elementary-level science and math problems were created with GPT.
English translation data (English-Korean / Korean-English) was also used for training.
In total, roughly 40,000 examples were used.
<br>
For the DPO data, answers were regenerated from the hh-rlhf data using gpt-3.5-turbo.
<br>
+ TruthfulQA-style questions were added (true/false questions about common misconceptions).
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data was added.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training used 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on 2x A100 40G.
mradermacher/Bahasa-4b-GGUF
mradermacher
"2025-01-16T19:23:35Z"
230
0
transformers
[ "transformers", "gguf", "id", "base_model:Bahasalab/Bahasa-4b", "base_model:quantized:Bahasalab/Bahasa-4b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-16T19:08:05Z"
--- base_model: Bahasalab/Bahasa-4b language: - id library_name: transformers license: other license_name: tongyi-qianwen quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/Bahasalab/Bahasa-4b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q2_K.gguf) | Q2_K | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.IQ4_XS.gguf) | IQ4_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q6_K.gguf) | Q6_K | 3.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.Q8_0.gguf) | Q8_0 | 4.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Bahasa-4b-GGUF/resolve/main/Bahasa-4b.f16.gguf) | f16 | 8.0 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
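## Example usage (sketch)

One way to run one of these quants locally; llama-cpp-python is used here purely as an example runtime, which is my choice rather than something this card prescribes. The file name matches the Q4_K_M entry in the table above, and the prompt and sampling settings are placeholders:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Bahasa-4b.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
)

output = llm(
    "Siapa presiden pertama Indonesia?",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```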
RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf
RichardErkhov
"2024-06-15T23:39:08Z"
16
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
"2024-06-15T15:54:53Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Stheno-Mega-False-49B-L2 - GGUF - Model creator: https://huggingface.co/Sao10K/ - Original model: https://huggingface.co/Sao10K/Stheno-Mega-False-49B-L2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Stheno-Mega-False-49B-L2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q2_K.gguf) | Q2_K | 17.01GB | | [Stheno-Mega-False-49B-L2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.IQ3_XS.gguf) | IQ3_XS | 18.8GB | | [Stheno-Mega-False-49B-L2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.IQ3_S.gguf) | IQ3_S | 19.87GB | | [Stheno-Mega-False-49B-L2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q3_K_S.gguf) | Q3_K_S | 19.87GB | | [Stheno-Mega-False-49B-L2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.IQ3_M.gguf) | IQ3_M | 21.04GB | | [Stheno-Mega-False-49B-L2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q3_K.gguf) | Q3_K | 22.31GB | | [Stheno-Mega-False-49B-L2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q3_K_M.gguf) | Q3_K_M | 22.31GB | | [Stheno-Mega-False-49B-L2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q3_K_L.gguf) | Q3_K_L | 24.46GB | | [Stheno-Mega-False-49B-L2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.IQ4_XS.gguf) | IQ4_XS | 24.73GB | | [Stheno-Mega-False-49B-L2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q4_0.gguf) | Q4_0 | 25.97GB | | [Stheno-Mega-False-49B-L2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.IQ4_NL.gguf) | IQ4_NL | 26.13GB | | [Stheno-Mega-False-49B-L2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q4_K_S.gguf) | Q4_K_S | 26.14GB | | [Stheno-Mega-False-49B-L2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q4_K.gguf) | Q4_K | 27.77GB | | [Stheno-Mega-False-49B-L2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q4_K_M.gguf) | Q4_K_M | 27.77GB | | [Stheno-Mega-False-49B-L2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q4_1.gguf) | Q4_1 | 28.85GB | | 
[Stheno-Mega-False-49B-L2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q5_0.gguf) | Q5_0 | 31.72GB | | [Stheno-Mega-False-49B-L2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q5_K_S.gguf) | Q5_K_S | 31.72GB | | [Stheno-Mega-False-49B-L2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q5_K.gguf) | Q5_K | 32.64GB | | [Stheno-Mega-False-49B-L2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q5_K_M.gguf) | Q5_K_M | 32.64GB | | [Stheno-Mega-False-49B-L2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/blob/main/Stheno-Mega-False-49B-L2.Q5_1.gguf) | Q5_1 | 34.59GB | | [Stheno-Mega-False-49B-L2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/tree/main/) | Q6_K | 37.82GB | | [Stheno-Mega-False-49B-L2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Stheno-Mega-False-49B-L2-gguf/tree/main/) | Q8_0 | 48.98GB | Original model description: --- license: llama2 language: - en --- ... A False-49B Llama2 Model, made from... splicing models together with Mergekit. Purely experimental and for the memes. Better than 70Bs? No. Better than 34bs? Doubt. better than 13b? maybe. hopefully. Why does it exist? Because I don't know. https://huggingface.co/Sao10K/Stheno-Mega-False-49B-L2-GGUF <img src="https://pbs.twimg.com/media/EGWbSqfUwAUTM2F.jpg" style="width: 50%; min-width: 100px; display: block; margin: auto;">
RichardErkhov/saberai_-_Zro1.5_3B-8bits
RichardErkhov
"2024-10-26T12:59:06Z"
5
0
null
[ "safetensors", "gpt_neox", "8-bit", "bitsandbytes", "region:us" ]
null
"2024-10-26T12:56:47Z"
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Zro1.5_3B - bnb 8bits
- Model creator: https://huggingface.co/saberai/
- Original model: https://huggingface.co/saberai/Zro1.5_3B/

Original model description:
---
license: apache-2.0
---

Introducing Zro1.5_3B by Saber AI – Precision and Performance in a Compact Package, fine-tuned for enhanced reasoning and mathematical skills on low-powered mobile devices.

Key Features:

- Efficient Precision: Zro1.5_3B excels in capturing nuances despite its compact size, making it a powerful Small Language Model (SLM).
- Resource Optimization: maximized efficiency without sacrificing performance, ensuring seamless integration into diverse platforms.
- Adaptability: customize parameters to tailor the model to your project's specific needs, offering flexibility and versatility.
- Scalability: designed to scale effortlessly, from small projects to large-scale applications.
- Cutting-Edge Technology: leverage the latest in natural language processing for state-of-the-art performance.
- Reasoning Enhancement: fine-tuned from RedPajama-INCITE-Chat-3B-v1 to improve reasoning and mathematical skills.
- Mobile Optimization: tailored for low-powered mobile devices, ensuring optimal performance on the go.
- User-Friendly Integration: effortless incorporation with comprehensive documentation and support.

Elevate your projects with Zro1.5_3B – where small size meets unparalleled performance, specifically enhancing reasoning and mathematical skills on low-powered mobile devices.
Kaelan/Smollm2-1.7B-COT-GRPO-16BIT
Kaelan
"2025-03-05T16:49:32Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-05T16:48:59Z"
--- base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - grpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Kaelan - **License:** apache-2.0 - **Finetuned from model :** HuggingFaceTB/SmolLM2-1.7B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
qanastek/biomedical-specialities-classifier-french
qanastek
"2023-03-22T12:33:55Z"
5
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "medical", "chemistry", "biology", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-22T11:13:24Z"
--- license: apache-2.0 language: - fr metrics: - accuracy pipeline_tag: text-classification tags: - medical - chemistry - biology ---
chauhoang/fc225955-c09e-48a6-a69c-348b31b424ef
chauhoang
"2025-01-14T03:06:12Z"
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.1-Storm-8B", "base_model:adapter:unsloth/Llama-3.1-Storm-8B", "license:llama3.1", "region:us" ]
null
"2025-01-14T02:14:53Z"
--- library_name: peft license: llama3.1 base_model: unsloth/Llama-3.1-Storm-8B tags: - axolotl - generated_from_trainer model-index: - name: fc225955-c09e-48a6-a69c-348b31b424ef results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.1-Storm-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c915f3b5c1e09a30_train_data.json ds_type: json format: custom path: /workspace/input_data/c915f3b5c1e09a30_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: chauhoang/fc225955-c09e-48a6-a69c-348b31b424ef hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/c915f3b5c1e09a30_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b401ef21-c760-4f0a-bbf5-c54631c430a8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b401ef21-c760-4f0a-bbf5-c54631c430a8 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # fc225955-c09e-48a6-a69c-348b31b424ef This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.8080 | | 0.7231 | 0.0013 | 10 | 0.7534 | | 0.689 | 0.0025 | 20 | 0.6732 | | 0.7098 | 0.0038 | 30 | 0.6495 | | 0.6582 | 0.0050 | 40 | 0.6418 | | 0.6529 | 0.0063 | 50 | 0.6407 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
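## Usage (sketch)

Since this artifact is a LoRA adapter (PEFT) rather than a merged checkpoint, loading it for inference typically looks like the sketch below. The prompt and generation settings are placeholders, not values taken from the training run:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.1-Storm-8B"
adapter_id = "chauhoang/fc225955-c09e-48a6-a69c-348b31b424ef"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "Summarise what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```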
StadlerRob/a2c-AntBulletEnv-v0-100k-722
StadlerRob
"2023-04-03T13:47:38Z"
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-04-03T13:46:34Z"
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 746.30 +/- 76.83
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is a placeholder and has not been verified against this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption; use the checkpoint actually stored in this repo.
path = load_from_hub("StadlerRob/a2c-AntBulletEnv-v0-100k-722", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(path)
```
mradermacher/Kazemi-1.2-Solar-GGUF
mradermacher
"2024-05-06T06:09:38Z"
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/Kazemi-1.2-Solar", "base_model:quantized:Sao10K/Kazemi-1.2-Solar", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-03-18T14:35:12Z"
--- base_model: Sao10K/Kazemi-1.2-Solar language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/Sao10K/Kazemi-1.2-Solar <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q3_K_S.gguf) | Q3_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q3_K_L.gguf) | Q3_K_L | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.IQ4_XS.gguf) | IQ4_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q5_K_S.gguf) | Q5_K_S | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q5_K_M.gguf) | Q5_K_M | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q6_K.gguf) | Q6_K | 9.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Kazemi-1.2-Solar-GGUF/resolve/main/Kazemi-1.2-Solar.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
HachiML/Swallow-7b-hf-CodeSkill
HachiML
"2024-04-05T03:50:30Z"
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-05T03:44:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ronarrowhead/ddpm-celebahq-finetuned-butterflies-2epochs
Ronarrowhead
"2023-10-27T09:03:21Z"
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2023-10-27T09:03:03Z"
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

Describe your model here

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('Ronarrowhead/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
LoneStriker/Qwen1.5-4B-Chat-8.0bpw-h8-exl2
LoneStriker
"2024-02-17T14:08:51Z"
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2309.16609", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-17T14:07:01Z"
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-4B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-4B-Chat", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B-Chat") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-4B-Chat-GPTQ-Int4`, `Qwen1.5-4B-Chat-GPTQ-Int8`, `Qwen1.5-4B-Chat-AWQ`, and `Qwen1.5-4B-Chat-GGUF`. ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
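The quantized correspondents mentioned above can typically be loaded through the same `transformers` API. The following is a minimal, unofficial sketch (not from the model authors), assuming the Int4 variant lives at the repo id `Qwen/Qwen1.5-4B-Chat-GPTQ-Int4` and that the `optimum` and `auto-gptq` packages are installed alongside `transformers>=4.37.0`.

```python
# Sketch: loading the GPTQ-quantized chat variant listed in the card above.
# Assumed repo id and assumed extra dependencies (optimum, auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B-Chat-GPTQ-Int4",  # assumed repo id for the Int4 correspondent
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B-Chat-GPTQ-Int4")

# Generation then follows the same apply_chat_template flow as the Quickstart above.
```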
FatihC/swin-tiny-patch4-window7-224-finetuned-eurosat-watermark
FatihC
"2023-04-20T10:37:48Z"
195
3
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-04-20T09:46:13Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: images split: train args: images metrics: - name: Accuracy type: accuracy value: 0.9609375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1211 - Accuracy: 0.9609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 0.4862 | 0.8516 | | No log | 2.0 | 8 | 0.4103 | 0.8828 | | 0.4518 | 3.0 | 12 | 0.3210 | 0.8984 | | 0.4518 | 4.0 | 16 | 0.2053 | 0.9375 | | 0.2909 | 5.0 | 20 | 0.1675 | 0.9453 | | 0.2909 | 6.0 | 24 | 0.1439 | 0.9531 | | 0.2909 | 7.0 | 28 | 0.1448 | 0.9297 | | 0.1492 | 8.0 | 32 | 0.1798 | 0.9531 | | 0.1492 | 9.0 | 36 | 0.1360 | 0.9453 | | 0.1161 | 10.0 | 40 | 0.1670 | 0.9531 | | 0.1161 | 11.0 | 44 | 0.1637 | 0.9531 | | 0.1161 | 12.0 | 48 | 0.1298 | 0.9531 | | 0.1053 | 13.0 | 52 | 0.1162 | 0.9531 | | 0.1053 | 14.0 | 56 | 0.1353 | 0.9531 | | 0.0839 | 15.0 | 60 | 0.1211 | 0.9609 | | 0.0839 | 16.0 | 64 | 0.1113 | 0.9609 | | 0.0839 | 17.0 | 68 | 0.1145 | 0.9609 | | 0.0689 | 18.0 | 72 | 0.1239 | 0.9531 | | 0.0689 | 19.0 | 76 | 0.1280 | 0.9531 | | 0.0581 | 20.0 | 80 | 0.1533 | 0.9531 | | 0.0581 | 21.0 | 84 | 0.1323 | 0.9609 | | 0.0581 | 22.0 | 88 | 0.1327 | 0.9531 | | 0.0545 | 23.0 | 92 | 0.1529 | 0.9531 | | 0.0545 | 24.0 | 96 | 0.1357 | 0.9531 | | 0.046 | 25.0 | 100 | 0.1333 | 0.9531 | | 0.046 | 26.0 | 104 | 0.1466 | 0.9531 | | 0.046 | 27.0 | 108 | 0.1300 | 0.9531 | | 0.0421 | 28.0 | 112 | 0.1077 | 0.9609 | | 0.0421 | 29.0 | 116 | 0.0985 | 0.9609 | | 0.0371 | 30.0 | 120 | 0.1186 | 0.9531 | | 0.0371 | 31.0 | 124 | 0.1123 | 0.9531 | | 0.0371 | 32.0 | 128 | 0.1144 | 0.9531 | | 0.0348 | 33.0 | 132 | 0.1276 | 0.9531 | | 0.0348 | 34.0 | 136 | 0.1488 | 0.9531 | | 0.0211 | 35.0 | 140 | 0.1560 | 0.9531 | | 0.0211 | 36.0 | 144 | 0.1477 | 0.9531 | | 0.0211 | 37.0 | 148 | 0.1488 | 0.9531 | | 0.0274 | 38.0 | 152 | 0.1467 | 0.9531 | | 0.0274 | 39.0 | 156 | 0.1401 | 0.9531 | | 0.0259 | 40.0 | 160 | 0.1379 | 0.9531 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
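The card above documents the training configuration but no inference snippet. Below is a minimal sketch, assuming the checkpoint loads with the standard `transformers` image-classification pipeline; the label names depend on the (undocumented) imagefolder classes used for training.

```python
# Sketch: inference with the fine-tuned Swin classifier described above.
# The label set comes from the checkpoint's config and is not documented in the card.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="FatihC/swin-tiny-patch4-window7-224-finetuned-eurosat-watermark",
)

# Accepts a local path, URL, or PIL.Image.
predictions = classifier("example.jpg")
print(predictions)  # e.g. [{"label": "...", "score": 0.96}, ...]
```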
allstax/shorty-4-1
allstax
"2024-03-21T19:35:33Z"
49
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-21T19:32:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
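Since the "How to Get Started" section above is still a placeholder, here is a minimal sketch, assuming the checkpoint behaves like a standard Phi-style causal LM (the repo is tagged `phi` and `custom_code`); the prompt format and generation settings are illustrative assumptions, not documented behaviour.

```python
# Sketch: generic getting-started snippet for this Phi-architecture checkpoint.
# Prompt format and generation settings below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allstax/shorty-4-1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,  # the repo is tagged with custom_code
)

inputs = tokenizer("Write a one-sentence summary of photosynthesis.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```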
ngchuchi/hf_text_classification
ngchuchi
"2024-12-27T20:38:31Z"
117
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-27T15:37:15Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: hf_text_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf_text_classification This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2311 - Accuracy: 0.9329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2413 | 1.0 | 1563 | 0.2057 | 0.9210 | | 0.1581 | 2.0 | 3126 | 0.2311 | 0.9329 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
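The card above reports training hyperparameters and accuracy but no usage snippet. A minimal inference sketch follows, assuming the checkpoint loads with the standard `transformers` text-classification pipeline; the dataset and label names are not documented in the card, so the printed labels depend on the checkpoint's config.

```python
# Sketch: inference with the fine-tuned DistilBERT classifier described above.
# Label names are undocumented and come from the checkpoint's config.
from transformers import pipeline

classifier = pipeline("text-classification", model="ngchuchi/hf_text_classification")
print(classifier("This was a surprisingly good movie."))
```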
ahmmu20/rpg_v4
ahmmu20
"2023-02-02T22:40:36Z"
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-02-02T22:23:00Z"
---
license: creativeml-openrail-m
---

## RPG_v4

This model was created by [Anashel](https://civitai.com/user/Anashel). For more information, please check the model page on [Civitai](https://civitai.com/models/1116/rpg).

If I'm breaking any rules by uploading this model to Hugging Face, please let me know and I will gladly remove it. The reason I'm uploading the model here is that I need to use it in Colab.
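Since the stated goal is Colab use, here is a minimal loading sketch, assuming the repository holds a single Stable Diffusion checkpoint file rather than a diffusers-format folder; the filename below is hypothetical and must be replaced with the actual file in this repo, and a recent `diffusers` version is assumed.

```python
# Sketch of loading this checkpoint in Colab with diffusers.
# "rpg_v4.safetensors" is a hypothetical filename -- replace with the real one.
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download("ahmmu20/rpg_v4", "rpg_v4.safetensors")  # hypothetical filename
pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16).to("cuda")

image = pipe("portrait of a fantasy rpg character, detailed armor").images[0]
image.save("rpg_character.png")
```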
OrionStarAI/Orion-14B-Chat
OrionStarAI
"2024-04-11T10:48:51Z"
5,013
64
transformers
[ "transformers", "pytorch", "gguf", "orion", "text-generation", "code", "model", "llm", "conversational", "custom_code", "en", "zh", "ja", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-16T06:03:30Z"
--- language: - en - zh - ja - ko metrics: - accuracy pipeline_tag: text-generation tags: - code - model - llm --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" /> </div> <div align="center"> <h1> Orion-14B </h1> </div> <div align="center"> <div align="center"> <b>🌐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_zh.md" target="_blank">🇨🇳中文</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_ja.md" target="_blank">🇯🇵日本語</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_ko.md" target="_blank">🇰🇷한국어</a> </div> <h4 align="center"> <p> 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>😺 <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>📖 <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a> <p> </h4> </div> # Table of Contents - [📖 Model Introduction](#model-introduction) - [🔗 Model Download](#model-download) - [🔖 Model Benchmark](#model-benchmark) - [📊 Model Inference](#model-inference)[<img src="./assets/imgs/vllm_1.png" alt="vllm" style="margin: 0;display: initial;" height="20" />](#vllm) [<img src="./assets/imgs/llama_cpp_1.png" alt="llamacpp" style="margin: 0;display: initial;" height="20" />](#llama-cpp) - [📜 Declarations & License](#declarations-license) - [🥇 Company Introduction](#company-introduction) <a name="model-introduction"></a><br> # 1. Model Introduction - Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc, and it exhibits superior performance in these languages. For details, please refer to [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf). - The Orion-14B series models exhibit the following features: - Among models with 20B-parameter scale level, Orion-14B-Base model shows outstanding performance in comprehensive evaluations. - Strong multilingual capabilities, significantly outperforming in Japanese and Korean testsets. - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests. - The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k. - The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%. 
<table style="border-collapse: collapse; width: 100%;"> <tr> <td style="border: none; padding: 10px; box-sizing: border-box;"> <img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;"> </td> <td style="border: none; padding: 10px; box-sizing: border-box;"> <img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;"> </td> </tr> </table> - Orion-14B series models including: - **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens. - **Orion-14B-Chat:** A chat-model fine-tuned on a high-quality corpus aims to provide an excellence interactive experience for users in the large model community. - **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k. - **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks. - **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system. - **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%. - **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights. <a name="model-download"></a><br> # 2. Model Download Model release and download links are provided in the table below: | Model Name | HuggingFace Download Links | ModelScope Download Links | |-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | ⚾Orion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) | | 😛Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) | | 📃Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) | | 🔎Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) | | 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) | | 💼Orion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) | | 📦Orion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) | <a name="model-benchmark"></a><br> # 3. 
Model Benchmarks ## 3.1. Base Model Orion-14B-Base Benchmarks ### 3.1.1. LLM evaluation results on examination and professional knowledge | Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 | | Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 | | Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 | | QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 | | InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 | | **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** | ### 3.1.2. LLM evaluation results on language understanding and common knowledge | Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 | | Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 | | Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 | | QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 | | InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 | | **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** | ### 3.1.3. LLM evaluation results of OpenCompass testsets | Model | Average | Examination | Language | Knowledge | Understanding | Reasoning | |------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 | | Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 | | Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 | | QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 | | InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 | |**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** | ### 3.1.4. Comparison of LLM performances on Japanese testsets | Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM | |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------| | PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 | | WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 | | ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 | | StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 | | LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 | | Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 | | QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 | | Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 | | **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** | ### 3.1.5. Comparison of LLM performances on Korean testsets. 
n = 0 and n = 5 stand for n-shot prompts used in the evaluation |Model | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br> n=0&nbsp;&nbsp;n=5 | BooIQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5| |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------| | KoGPT | 53.0 &nbsp;&nbsp; 70.1 | 55.9 &nbsp;&nbsp; 58.3 | 73.5 &nbsp;&nbsp; 72.9 | 45.1 &nbsp;&nbsp; 59.8 | 37.5 &nbsp;&nbsp; 89.4 | | Polyglot-ko-13B | 69.6 &nbsp;&nbsp; 73.7 |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4 | 91.2 &nbsp;&nbsp; 90.2 | | LLaMA 2-13B | 46.7 &nbsp;&nbsp; 63.7 | 41.3 &nbsp;&nbsp; 44.0 | 59.3 &nbsp;&nbsp; 63.8 | 34.9 &nbsp;&nbsp; 73.8 | 51.5 &nbsp;&nbsp; 73.4 | | Baichuan 2-13B | 52.1 &nbsp;&nbsp; 58.7 | 39.2 &nbsp;&nbsp; 39.6 | 60.6 &nbsp;&nbsp; 60.6 | 58.4 &nbsp;&nbsp; 61.5 | 50.3 &nbsp;&nbsp; 72.9 | | QWEN-14B | 53.8 &nbsp;&nbsp; 73.7 | 45.3 &nbsp;&nbsp; 46.8 | 64.9 &nbsp;&nbsp; 68.9 | 33.4 &nbsp;&nbsp; 83.5 | 71.5 &nbsp;&nbsp; 95.7 | | Yi-34B | 54.2 &nbsp;&nbsp; 72.1 | 44.6 &nbsp;&nbsp; 44.7 | 58.0 &nbsp;&nbsp; 60.6 | 65.9 &nbsp;&nbsp; 90.2 | 48.3 &nbsp;&nbsp; 92.9 | |**Orion-14B-Chat**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6 | 77.7 &nbsp;&nbsp; 79.4 |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**| ### 3.1.6. Multilingual evaluation | Model | Train Lang | Japanese | Korean | Chinese | English | |--------------------|------------|----------|----------|----------|----------| | PLaMo-13B | En,Jp | 52.3 | * | * | * | | Weblab-10B | En,Jp | 50.7 | * | * | * | | ELYZA-jp-7B | En,Jp | 48.8 | * | * | * | | StableLM-jp-7B | En,Jp | 51.1 | * | * | * | | KoGPT-6B | En,Ko | * | 70.1 | * | * | | Polyglot-ko-13B | En,Ko | * | 70.7 | * | * | | Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 | | Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 | | Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 | | Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** | | **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 | ## 3.2. Chat Model Orion-14B-Chat Benchmarks ### 3.2.1. Chat model subjective evaluation of MTBench | Model | First-Turn | Second-Turn | **Average** | |----------------------|----------|----------|----------| | Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 | | Qwen-14B-Chat | 7.30 | 6.62 | 6.96 | | Llama2-13B-Chat | 7.10 | 6.20 | 6.65 | | InternLM-20B-Chat | 7.03 | 5.93 | 6.48 | | **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** | \* use vllm for inference ### 3.2.2. Chat model subjective evaluation of AlignBench | Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**| |--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------| | Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 | | Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**| | Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 | | InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 | | **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 | \* use vllm for inference ## 3.3. LongChat Model Orion-14B-LongChat Benchmarks ### 3.3.1. 
LongChat evaluation of LongBench | Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P| |--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 | | LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 | | Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 | | Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** | | Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 | ## 3.4. Chat RAG Model Benchmarks ### 3.4.1. LLM evaluation results of self-built RAG testsets |Model|Effectiveness of Response(Keyword)|*Effectiveness of Response(subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction| |---------------------|------|------|------|------|------|------| | Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 | | Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 | | Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 | | GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 | | Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 | \* means manual assessment ## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks ### 3.5.1. LLM evaluation results of self-built plugin testsets |Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition | |-----------------------|--------|-----------|--------| | Baichuan2-13B-Chat | 25 | 0 | 0 | | Qwen-14B-Chat | 55 | 0 | 50 | | GPT-4 | **95** | 52.38 | 70 | | Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** | ## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks ### 3.6.1. Comparison of before and after quantization |Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag| |-------------------------|-------|-----|------|------|------|------|------| | OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 | | OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 | <a name="model-inference"></a><br> # 4. Model Inference Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link is available in the table at the beginning of this document. We demonstrate various inference methods here, and the program will automatically download the necessary resources from Hugging Face. ## 4.1. Python Code ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B") messages = [{"role": "user", "content": "Hello, what is your name? "}] response = model.chat(tokenizer, messages, streaming=False) print(response) ``` In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1). ## 4.2. 
Command Line Tool ```shell CUDA_VISIBLE_DEVICES=0 python cli_demo.py ``` This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model. ## 4.3. Direct Script Inference ```shell # base model CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello # chat model CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi ``` <a name="vllm"></a><br> ## 4.4. Inference by vllm - Project URL<br> https://github.com/vllm-project/vllm - Pull Request<br> https://github.com/vllm-project/vllm/pull/2539 <a name="llama-cpp"></a><br> ## 4.5. Inference by llama.cpp - Project URL<br> https://github.com/ggerganov/llama.cpp - Pull Request<br> https://github.com/ggerganov/llama.cpp/pull/5118 - How to convert to GGUF model ```shell python convert-hf-to-gguf.py path/to/Orion-14B-Chat --outfile chat.gguf ``` - How to run generation ```shell ./main --frequency-penalty 0.5 --frequency-penalty 0.5 --top-k 5 --top-p 0.9 -m chat.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e ``` ## 4.6 Example Output ### 4.6.1. Casual Chat ````` User: Hello Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask. User: What functions do you have? Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to: Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more. Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases. Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants. Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me. Translating languages: I can translate text from one language to another. Chatting: I can engage in conversation with users and respond to their queries in a conversational manner. Summarizing long texts: I can summarize longer texts like articles, books, and research papers. Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria. Playing games: I can play simple games like Hangman, Trivia, and more. Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences. These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it. User: Tell me a joke. Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything. ````` ### 4.6.2. Japanese & Korean Chat ````` User:自己を紹介してください Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。 User:자기소개를 해주세요. Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다. ````` <a name="declarations-license"></a><br> # 5. Declarations, License ## 5.1. Declarations We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law. 
Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing. We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment. We have done our best to ensure the compliance of the data used in the model training process. However, despite our significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any problems arise due to the use of the Orion-14B open-source model, including but not limited to data security issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility. ## 5.2. License Community use of the Orion-14B series models - For code, please comply with [Apache License Version 2.0](./LICENSE)<br> - For model, please comply with [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement) <a name="company-introduction"></a><br> # 6. Company Introduction OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology, OrionStar aims to make society and the world a better place. OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and visual navigation. It integrates product development capabilities and technological application capabilities. Based on the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini, Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for Truly Useful Robots", OrionStar empowers more people through AI technology. **The core strengths of OrionStar lies in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agent, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, it has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods. Companies with demands for deploying large-scale model applications are welcome to contact us.<br> **Enquiry Hotline: 400-898-7779**<br> **E-mail: [email protected]**<br> **Discord Link: https://discord.gg/zumjDWgdAs** <div align="center"> <img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" /> </div>
gntmky/mm3dtest
gntmky
"2024-06-05T17:27:47Z"
0
0
null
[ "arxiv:2301.06051", "arxiv:2307.14620", "arxiv:2207.12691", "arxiv:2302.07817", "region:us" ]
null
"2024-06-04T09:54:27Z"
<div align="center"> <img src="resources/mmdet3d-logo.png" width="600"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">OpenMMLab website</font></b> <sup> <a href="https://openmmlab.com"> <i><font size="4">HOT</font></i> </a> </sup> &nbsp;&nbsp;&nbsp;&nbsp; <b><font size="5">OpenMMLab platform</font></b> <sup> <a href="https://platform.openmmlab.com"> <i><font size="4">TRY IT OUT</font></i> </a> </sup> </div> <div>&nbsp;</div> [![PyPI](https://img.shields.io/pypi/v/mmdet3d)](https://pypi.org/project/mmdet3d) [![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection3d.readthedocs.io/en/latest/) [![badge](https://github.com/open-mmlab/mmdetection3d/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection3d/actions) [![codecov](https://codecov.io/gh/open-mmlab/mmdetection3d/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection3d) [![license](https://img.shields.io/github/license/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/blob/main/LICENSE) [![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/issues) [![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/issues) [📘Documentation](https://mmdetection3d.readthedocs.io/en/latest/) | [🛠️Installation](https://mmdetection3d.readthedocs.io/en/latest/get_started.html) | [👀Model Zoo](https://mmdetection3d.readthedocs.io/en/latest/model_zoo.html) | [🆕Update News](https://mmdetection3d.readthedocs.io/en/latest/notes/changelog.html) | [🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection3d/projects) | [🤔Reporting Issues](https://github.com/open-mmlab/mmdetection3d/issues/new/choose) </div> <div align="center"> English | [简体中文](README_zh-CN.md) </div> <div align="center"> <a href="https://openmmlab.medium.com/" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://space.bilibili.com/1293512903" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a> <img 
src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a> </div> ## Introduction MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the [OpenMMLab](https://openmmlab.com/) project. The main branch works with **PyTorch 1.8+**. ![demo image](resources/mmdet3d_outdoor_demo.gif) <details open> <summary>Major features</summary> - **Support multi-modality/single-modality detectors out of box** It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc. - **Support indoor/outdoor 3D detection out of box** It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For nuScenes dataset, we also support [nuImages dataset](https://github.com/open-mmlab/mmdetection3d/tree/main/configs/nuimages). - **Natural integration with 2D detection** All the about **300+ models, methods of 40+ papers**, and modules supported in [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase. - **High efficiency** It trains faster than other codebases. The main results are as below. Details can be found in [benchmark.md](./docs/en/notes/benchmarks.md). We compare the number of samples trained per second (the higher, the better). The models that are not supported by other codebases are marked by `✗`. | Methods | MMDetection3D | [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) | [votenet](https://github.com/facebookresearch/votenet) | [Det3D](https://github.com/poodarchu/Det3D) | | :-----------------: | :-----------: | :--------------------------------------------------: | :----------------------------------------------------: | :-----------------------------------------: | | VoteNet | 358 | ✗ | 77 | ✗ | | PointPillars-car | 141 | ✗ | ✗ | 140 | | PointPillars-3class | 107 | 44 | ✗ | ✗ | | SECOND | 40 | 30 | ✗ | ✗ | | Part-A2 | 17 | 14 | ✗ | ✗ | </details> Like [MMDetection](https://github.com/open-mmlab/mmdetection) and [MMCV](https://github.com/open-mmlab/mmcv), MMDetection3D can also be used as a library to support different projects on top of it. ## What's New ### Highlight In version 1.4, MMDetecion3D refactors the Waymo dataset and accelerates the preprocessing, training/testing setup, and evaluation of Waymo dataset. We also extends the support for camera-based, such as Monocular and BEV, 3D object detection models on Waymo. A detailed description of the Waymo data information is provided [here](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/waymo.html). Besides, in version 1.4, MMDetection3D provides [Waymo-mini](https://download.openmmlab.com/mmdetection3d/data/waymo_mmdet3d_after_1x4/waymo_mini.tar.gz) to help community users get started with Waymo and use it for quick iterative development. 
**v1.4.0** was released in 8/1/2024: - Support the training of [DSVT](<(https://arxiv.org/abs/2301.06051)>) in `projects` - Support [Nerf-Det](https://arxiv.org/abs/2307.14620) in `projects` - Refactor Waymo dataset **v1.3.0** was released in 18/10/2023: - Support [CENet](https://arxiv.org/abs/2207.12691) in `projects` - Enhance demos with new 3D inferencers **v1.2.0** was released in 4/7/2023 - Support [New Config Type](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) in `mmdet3d/configs` - Support the inference of [DSVT](<(https://arxiv.org/abs/2301.06051)>) in `projects` - Support downloading datasets from [OpenDataLab](https://opendatalab.com/) using `mim` **v1.1.1** was released in 30/5/2023: - Support [TPVFormer](https://arxiv.org/pdf/2302.07817.pdf) in `projects` - Support the training of BEVFusion in `projects` - Support lidar-based 3D semantic segmentation benchmark ## Installation Please refer to [Installation](https://mmdetection3d.readthedocs.io/en/latest/get_started.html) for installation instructions. ## Getting Started For detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection3d.readthedocs.io/en/latest/): <details> <summary>User Guides</summary> - [Train & Test](https://mmdetection3d.readthedocs.io/en/latest/user_guides/index.html#train-test) - [Learn about Configs](https://mmdetection3d.readthedocs.io/en/latest/user_guides/config.html) - [Coordinate System](https://mmdetection3d.readthedocs.io/en/latest/user_guides/coord_sys_tutorial.html) - [Dataset Preparation](https://mmdetection3d.readthedocs.io/en/latest/user_guides/dataset_prepare.html) - [Customize Data Pipelines](https://mmdetection3d.readthedocs.io/en/latest/user_guides/data_pipeline.html) - [Test and Train on Standard Datasets](https://mmdetection3d.readthedocs.io/en/latest/user_guides/train_test.html) - [Inference](https://mmdetection3d.readthedocs.io/en/latest/user_guides/inference.html) - [Train with Customized Datasets](https://mmdetection3d.readthedocs.io/en/latest/user_guides/new_data_model.html) - [Useful Tools](https://mmdetection3d.readthedocs.io/en/latest/user_guides/index.html#useful-tools) </details> <details> <summary>Advanced Guides</summary> - [Datasets](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#datasets) - [KITTI Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/kitti.html) - [NuScenes Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/nuscenes.html) - [Lyft Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/lyft.html) - [Waymo Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/waymo.html) - [SUN RGB-D Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/sunrgbd.html) - [ScanNet Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/scannet.html) - [S3DIS Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/s3dis.html) - [SemanticKITTI Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/semantickitti.html) - [Supported Tasks](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#supported-tasks) - [LiDAR-Based 3D Detection](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/lidar_det3d.html) - [Vision-Based 3D 
Detection](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/vision_det3d.html) - [LiDAR-Based 3D Semantic Segmentation](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/lidar_sem_seg3d.html) - [Customization](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#customization) - [Customize Datasets](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_dataset.html) - [Customize Models](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_models.html) - [Customize Runtime Settings](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_runtime.html) </details> ## Overview of Benchmark and Model Zoo Results and models are available in the [model zoo](docs/en/model_zoo.md). <div align="center"> <b>Components</b> </div> <table align="center"> <tbody> <tr align="center" valign="bottom"> <td> <b>Backbones</b> </td> <td> <b>Heads</b> </td> <td> <b>Features</b> </td> </tr> <tr valign="top"> <td> <ul> <li><a href="configs/pointnet2">PointNet (CVPR'2017)</a></li> <li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li> <li><a href="configs/regnet">RegNet (CVPR'2020)</a></li> <li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li> <li>DLA (CVPR'2018)</li> <li>MinkResNet (CVPR'2019)</li> <li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li> <li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li> </ul> </td> <td> <ul> <li><a href="configs/free_anchor">FreeAnchor (NeurIPS'2019)</a></li> </ul> </td> <td> <ul> <li><a href="configs/dynamic_voxelization">Dynamic Voxelization (CoRL'2019)</a></li> </ul> </td> </tr> </td> </tr> </tbody> </table> <div align="center"> <b>Architectures</b> </div> <table align="center"> <tbody> <tr align="center" valign="middle"> <td> <b>LiDAR-based 3D Object Detection</b> </td> <td> <b>Camera-based 3D Object Detection</b> </td> <td> <b>Multi-modal 3D Object Detection</b> </td> <td> <b>3D Semantic Segmentation</b> </td> </tr> <tr valign="top"> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/second">SECOND (Sensor'2018)</a></li> <li><a href="configs/pointpillars">PointPillars (CVPR'2019)</a></li> <li><a href="configs/ssn">SSN (ECCV'2020)</a></li> <li><a href="configs/3dssd">3DSSD (CVPR'2020)</a></li> <li><a href="configs/sassd">SA-SSD (CVPR'2020)</a></li> <li><a href="configs/point_rcnn">PointRCNN (CVPR'2019)</a></li> <li><a href="configs/parta2">Part-A2 (TPAMI'2020)</a></li> <li><a href="configs/centerpoint">CenterPoint (CVPR'2021)</a></li> <li><a href="configs/pv_rcnn">PV-RCNN (CVPR'2020)</a></li> <li><a href="projects/CenterFormer">CenterFormer (ECCV'2022)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/votenet">VoteNet (ICCV'2019)</a></li> <li><a href="configs/h3dnet">H3DNet (ECCV'2020)</a></li> <li><a href="configs/groupfree3d">Group-Free-3D (ICCV'2021)</a></li> <li><a href="configs/fcaf3d">FCAF3D (ECCV'2022)</a></li> <li><a href="projects/TR3D">TR3D (ArXiv'2023)</a></li> </ul> </td> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/imvoxelnet">ImVoxelNet (WACV'2022)</a></li> <li><a href="configs/smoke">SMOKE (CVPRW'2020)</a></li> <li><a href="configs/fcos3d">FCOS3D (ICCVW'2021)</a></li> <li><a href="configs/pgd">PGD (CoRL'2021)</a></li> <li><a href="configs/monoflex">MonoFlex (CVPR'2021)</a></li> <li><a href="projects/DETR3D">DETR3D (CoRL'2021)</a></li> <li><a href="projects/PETR">PETR (ECCV'2022)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/imvoxelnet">ImVoxelNet 
(WACV'2022)</a></li> </ul> </td> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/mvxnet">MVXNet (ICRA'2019)</a></li> <li><a href="projects/BEVFusion">BEVFusion (ICRA'2023)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/imvotenet">ImVoteNet (CVPR'2020)</a></li> </ul> </td> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li> <li><a href="configs/spvcnn">SPVCNN (ECCV'2020)</a></li> <li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li> <li><a href="projects/TPVFormer">TPVFormer (CVPR'2023)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li> <li><a href="configs/paconv">PAConv (CVPR'2021)</a></li> <li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li> </ul> </ul> </td> </tr> </td> </tr> </tbody> </table> | | ResNet | VoVNet | Swin-T | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet | | :-----------: | :----: | :----: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: | :------: | | SECOND | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | PointPillars | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | | FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | | VoteNet | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | H3DNet | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | 3DSSD | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | Part-A2 | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | MVXNet | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | CenterPoint | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | SSN | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | | ImVoteNet | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | PointNet++ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | Group-Free-3D | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | PAConv | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | DGCNN | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | | SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | | PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | | SA-SSD | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | | PV-RCNN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | Cylinder3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | | MinkUNet | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | | SPVCNN | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | | BEVFusion | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | CenterFormer | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | TR3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | | DETR3D | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | PETR | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | TPVFormer | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | **Note:** All the about **500+ models, methods of 90+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase. ## FAQ Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions. ## Contributing We appreciate all contributions to improve MMDetection3D. Please refer to [CONTRIBUTING.md](docs/en/notes/contribution_guides.md) for the contributing guideline. ## Acknowledgement MMDetection3D is an open source project that is contributed by researchers and engineers from various colleges and companies. 
We appreciate all the contributors as well as users who give valuable feedbacks. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new 3D detectors. ## Citation If you find this project useful in your research, please consider cite: ```latex @misc{mmdet3d2020, title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection}, author={MMDetection3D Contributors}, howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}}, year={2020} } ``` ## License This project is released under the [Apache 2.0 license](LICENSE). ## Projects in OpenMMLab - [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models. - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision. - [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries. - [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages. - [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark. - [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark. - [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection. - [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark. - [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark. - [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark. - [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox. - [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark. - [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark. - [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark. - [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark. - [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark. - [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark. - [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark. - [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark. - [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox. - [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox. - [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
hosst/FridgeLLM
hosst
"2023-05-26T16:15:55Z"
0
1
adapter-transformers
[ "adapter-transformers", "technicians", "appliances", "manufacturers", "troubleshooting", "conversational", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "region:us" ]
text-generation
"2023-05-26T15:44:41Z"
--- license: apache-2.0 library_name: adapter-transformers pipeline_tag: conversational tags: - technicians - appliances - manufacturers - troubleshooting datasets: - OpenAssistant/oasst1 ---
freddiebevis/annie_winstanley_lora
freddiebevis
"2024-12-24T07:05:45Z"
5
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-12-24T06:19:53Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ANNIEW --- # Annie_Winstanley_Lora <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ANNIEW` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('freddiebevis/annie_winstanley_lora', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
mradermacher/natsumura-llama3.1-base-8b-GGUF
mradermacher
"2024-07-28T06:54:28Z"
6
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-07-27T23:39:57Z"
--- base_model: tohur/natsumura-llama3.1-base-8b language: - en library_name: transformers license: llama3.1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tohur/natsumura-llama3.1-base-8b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/natsumura-llama3.1-base-8b-GGUF/resolve/main/natsumura-llama3.1-base-8b.f16.gguf) | f16 | 
16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
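A minimal sketch of running one of these quants with the llama.cpp CLI, assuming the Q4_K_M file from the table above; the prompt is only a placeholder and the `--hf-repo`/`--hf-file` flags follow the usual llama.cpp convention for pulling a single-file GGUF from the Hub.

```bash
# Streams the Q4_K_M quant from the Hub and runs a plain completion
llama-cli --hf-repo mradermacher/natsumura-llama3.1-base-8b-GGUF \
  --hf-file natsumura-llama3.1-base-8b.Q4_K_M.gguf \
  -c 4096 -p "Once upon a time"
```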
Casper0508/Casper_falcon_7b
Casper0508
"2024-03-05T22:58:04Z"
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
"2024-03-05T02:39:00Z"
--- base_model: ybelkada/falcon-7b-sharded-bf16 tags: - generated_from_trainer model-index: - name: Casper_falcon_7b results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Casper_falcon_7b This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.15.2
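A minimal inference sketch, assuming the base model is loaded with the same 4-bit settings listed above and this repository is attached as a PEFT adapter; the prompt format is not documented in the card, so the example prompt is only a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit settings mirroring the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "ybelkada/falcon-7b-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # older Falcon checkpoints ship custom modeling code
)

# Attach the PEFT adapter weights from this repository
model = PeftModel.from_pretrained(base, "Casper0508/Casper_falcon_7b")

inputs = tokenizer("Question: What is this model for?\nAnswer:", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```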
course-genai-w24/week4-phi-1.5-sft-shakespeare
course-genai-w24
"2024-11-04T13:54:34Z"
5
0
null
[ "safetensors", "phi", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:mit", "region:us" ]
null
"2024-11-01T15:27:41Z"
--- license: mit base_model: - microsoft/phi-1_5 --- # Model Card for Model ID ### Summary <!-- Provide a quick summary of what the model is/does. --> This is a supervised fine-tuned model for text completion based on Phi 1.5. It has been finetuned on a filtered version of The Complete Works of William Shakespeare, which can be found and downloaded from here: [https://www.gutenberg.org/ebooks/100](https://www.gutenberg.org/ebooks/100). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Course Organizers - **Finetuned from model:** microsoft/phi-1_5 ### Training Details This model has been trained using the TRL library and its SFTTrainer class from Hugging Face. ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The Complete Works of William Shakespeare, which can be found and downloaded from here: [https://www.gutenberg.org/ebooks/100](https://www.gutenberg.org/ebooks/100) #### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - per_device_train_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 1 - optimizer: Adam with betas=(0.9, 0.95) - lr_scheduler_type: linear - weight_decay: 0.1 - num_epochs: 1 ### Framework Versions - accelerate==0.26.1 - datasets==2.16.1 - transformers==4.45.2 - trl==0.11.2 ### Compute Infrastructure and Hardware Slurm cluster with 8 x H100 Nvidia GPUs.
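A minimal completion sketch, assuming standard `transformers` text generation; the prompt and sampling settings are placeholders, not values taken from the course material.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "course-genai-w24/week4-phi-1.5-sft-shakespeare"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# The model was tuned for plain text completion, so no chat template is applied
prompt = "Shall I compare thee to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```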
mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF
mradermacher
"2024-12-25T09:12:14Z"
39
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "ChaoticNeutrals/Kool-Aid_7B", "en", "base_model:Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "base_model:quantized:Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "endpoints_compatible", "region:us" ]
null
"2024-12-25T08:47:32Z"
--- base_model: Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp - ChaoticNeutrals/Kool-Aid_7B --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | 
| | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp-GGUF/resolve/main/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF
goothod
"2025-02-06T20:43:31Z"
27
0
vllm
[ "vllm", "gguf", "transformers", "llama-cpp", "gguf-my-repo", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:uncensoredai/Mistral-Small-24B-Instruct-2501", "base_model:quantized:uncensoredai/Mistral-Small-24B-Instruct-2501", "license:apache-2.0", "region:us", "conversational" ]
null
"2025-02-06T20:42:24Z"
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: uncensoredai/Mistral-Small-24B-Instruct-2501 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers - llama-cpp - gguf-my-repo --- # goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF This model was converted to GGUF format from [`uncensoredai/Mistral-Small-24B-Instruct-2501`](https://huggingface.co/uncensoredai/Mistral-Small-24B-Instruct-2501) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/uncensoredai/Mistral-Small-24B-Instruct-2501) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF --hf-file mistral-small-24b-instruct-2501-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF --hf-file mistral-small-24b-instruct-2501-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF --hf-file mistral-small-24b-instruct-2501-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo goothod/Mistral-Small-24B-Instruct-2501-Q4_K_M-GGUF --hf-file mistral-small-24b-instruct-2501-q4_k_m.gguf -c 2048 ```
Dmytroo/Krimkramp
Dmytroo
"2023-02-24T12:15:43Z"
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
"2023-02-24T12:15:43Z"
--- license: bigscience-openrail-m ---
nhunglaaaaaaa/9817df33-8151-4e2c-874a-6f2db8d31125
nhunglaaaaaaa
"2025-01-24T22:32:45Z"
8
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-24T22:27:07Z"
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 9817df33-8151-4e2c-874a-6f2db8d31125 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-350m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 1561c4e5934d6040_train_data.json ds_type: json format: custom path: /workspace/input_data/1561c4e5934d6040_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhunglaaaaaaa/9817df33-8151-4e2c-874a-6f2db8d31125 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/1561c4e5934d6040_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2e3e9c07-aa87-4bf9-8f6c-649366b77a1f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2e3e9c07-aa87-4bf9-8f6c-649366b77a1f warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9817df33-8151-4e2c-874a-6f2db8d31125 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.4239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.625 | 0.0644 | 200 | 2.4239 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nagupv/Stable13B_contextLLMExam_18kv2_f1
nagupv
"2023-08-20T08:53:12Z"
3
0
peft
[ "peft", "region:us" ]
null
"2023-08-20T08:52:27Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
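A minimal sketch of the quantization settings listed above expressed as a `transformers` `BitsAndBytesConfig`; the base model behind this adapter is not named in the card, so no loading call is shown.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above (4-bit NF4, double quantization, bf16 compute)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```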
YaserSabriFMD/Jj
YaserSabriFMD
"2025-03-08T20:21:52Z"
0
0
null
[ "base_model:deepseek-ai/DeepSeek-R1", "base_model:finetune:deepseek-ai/DeepSeek-R1", "region:us" ]
null
"2025-03-08T20:20:43Z"
--- base_model: - deepseek-ai/DeepSeek-R1 ---
HachiML/Mists-7B-v01-simpleQA
HachiML
"2024-07-07T04:01:26Z"
7
0
transformers
[ "transformers", "safetensors", "mists", "feature-extraction", "trl", "sft", "generated_from_trainer", "custom_code", "base_model:HachiML/Mists-7B-v01-simple-projector-trained", "base_model:finetune:HachiML/Mists-7B-v01-simple-projector-trained", "license:apache-2.0", "region:us" ]
feature-extraction
"2024-07-07T03:48:33Z"
--- base_model: HachiML/Mists-7B-v01-simple-projector-trained license: apache-2.0 tags: - trl - sft - generated_from_trainer model-index: - name: Mists-7B-v01-simpleQA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/siseikatu8/huggingface/runs/ttzci7jp) # Mists-7B-v01-simpleQA This model is a fine-tuned version of [HachiML/Mists-7B-v01-simple-projector-trained](https://huggingface.co/HachiML/Mists-7B-v01-simple-projector-trained) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1811 | 0.0888 | 50 | 5.1410 | | 1.2662 | 0.1776 | 100 | 4.0209 | | 0.5676 | 0.2664 | 150 | 3.6247 | | 0.6122 | 0.3552 | 200 | 2.5723 | | 0.3855 | 0.4440 | 250 | 1.8618 | | 0.5471 | 0.5329 | 300 | 1.5969 | | 0.5238 | 0.6217 | 350 | 0.9045 | | 0.3799 | 0.7105 | 400 | 0.5475 | | 0.2406 | 0.7993 | 450 | 0.1614 | | 0.1381 | 0.8881 | 500 | 0.1055 | | 0.0992 | 0.9769 | 550 | 0.0972 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.0.1 - Datasets 2.20.0 - Tokenizers 0.19.1
ebobo/3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2
ebobo
"2024-12-03T15:23:24Z"
13
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-12-03T14:43:40Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8698912b946b92ed_train_data.json ds_type: json format: custom path: /workspace/input_data/8698912b946b92ed_train_data.json type: field_input: context field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: ebobo/3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/8698912b946b92ed_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 4056 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3a76eb4b-e6c2-4c47-a06e-c1c42e82b9f2 This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.0027 | 0.0006 | 1 | 2.7413 | | 3.2161 | 0.0017 | 3 | 2.6312 | | 2.3622 | 0.0034 | 6 | 2.1885 | | 1.862 | 0.0051 | 9 | 2.2349 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
tancredimatteo/FT-distilbert-base-uncased
tancredimatteo
"2024-05-18T15:41:41Z"
121
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-18T15:27:49Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: FT-distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5957 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6820 | 0.575 | | No log | 2.0 | 80 | 0.6354 | 0.725 | | No log | 3.0 | 120 | 0.5957 | 0.7 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
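A minimal inference sketch using the `text-classification` pipeline; the label names are not documented above, so expect the generic `LABEL_0`/`LABEL_1` ids unless the model config maps them to something else.

```python
from transformers import pipeline

# Loads the fine-tuned checkpoint and classifies a sample sentence
clf = pipeline("text-classification", model="tancredimatteo/FT-distilbert-base-uncased")
print(clf("This is exactly what I was hoping for."))
```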
trangtrannnnn/2fec20c0-23a9-4016-bc6d-c130e6008cd5
trangtrannnnn
"2025-01-25T00:10:21Z"
8
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-9b-it", "base_model:adapter:unsloth/gemma-2-9b-it", "license:gemma", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-24T23:46:48Z"
--- library_name: peft license: gemma base_model: unsloth/gemma-2-9b-it tags: - axolotl - generated_from_trainer model-index: - name: 2fec20c0-23a9-4016-bc6d-c130e6008cd5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2-9b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 1085f375edf746fa_train_data.json ds_type: json format: custom path: /workspace/input_data/1085f375edf746fa_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: trangtrannnnn/2fec20c0-23a9-4016-bc6d-c130e6008cd5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/1085f375edf746fa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e5176faf-069b-429f-bcf6-9ec7d6901b6b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e5176faf-069b-429f-bcf6-9ec7d6901b6b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 2fec20c0-23a9-4016-bc6d-c130e6008cd5 This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2718 | 0.2479 | 200 | 0.2966 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nstrn-mo/bert-finetuned-arcchialogy-ner-default
nstrn-mo
"2024-11-05T11:32:52Z"
107
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-11-05T11:31:09Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-arcchialogy-ner-default results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-arcchialogy-ner-default This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1 - Datasets 3.0.1 - Tokenizers 0.20.1
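A minimal inference sketch using the `token-classification` pipeline; the entity label set of this archaeology NER model is not documented above, and the example sentence is only a placeholder.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nstrn-mo/bert-finetuned-arcchialogy-ner-default",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Excavations at the tell uncovered Bronze Age pottery and a Roman coin."))
```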
roleplaiapp/Dria-Agent-a-3B-Q2_K-GGUF
roleplaiapp
"2025-01-19T07:41:47Z"
30
0
transformers
[ "transformers", "gguf", "llama-cpp", "Dria-Agent-a-3B", "Q2_K", "3B", "2-bit", "Dria-Agent", "driaforall", "code", "math", "chat", "roleplay", "text-generation", "safetensors", "nlp", "en", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-01-19T07:41:43Z"
--- license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE language: - en base_model: - Qwen/Qwen2.5-Coder-3B-Instruct pipeline_tag: text-generation library_name: transformers tags: - llama-cpp - Dria-Agent-a-3B - gguf - Q2_K - 3B - 2-bit - Dria-Agent - llama-cpp - driaforall - code - math - chat - roleplay - text-generation - safetensors - nlp - code --- # roleplaiapp/Dria-Agent-a-3B-Q2_K-GGUF **Repo:** `roleplaiapp/Dria-Agent-a-3B-Q2_K-GGUF` **Original Model:** `Dria-Agent-a-3B` **Organization:** `driaforall` **Quantized File:** `dria-agent-a-3b-q2_k.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` **Use Imatrix:** `False` **Split Model:** `False` ## Overview This is a GGUF Q2_K quantized version of [Dria-Agent-a-3B](https://huggingface.co/driaforall/Dria-Agent-a-3B). ## Quantization By I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/)
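A minimal sketch of running this quant directly with the llama.cpp CLI, following the same `--hf-repo`/`--hf-file` pattern used by the other GGUF entries in this listing; the prompt is only a placeholder.

```bash
# Streams the Q2_K file from the Hub and runs a single completion
llama-cli --hf-repo roleplaiapp/Dria-Agent-a-3B-Q2_K-GGUF \
  --hf-file dria-agent-a-3b-q2_k.gguf \
  -c 2048 -p "Write a Python function that checks whether a string is a palindrome."
```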
phongtintruong/meomeo-2131048-450
phongtintruong
"2025-02-13T03:49:21Z"
0
0
transformers
[ "transformers", "safetensors", "meomeo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-02-13T03:48:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yazidtagnaouti/maes
yazidtagnaouti
"2024-02-14T14:45:39Z"
48
0
setfit
[ "setfit", "safetensors", "camembert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:Sahajtomar/french_semantic", "base_model:finetune:Sahajtomar/french_semantic", "model-index", "region:us" ]
text-classification
"2024-02-13T16:01:20Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: Quelle est la durée typique d'un prêt auto chez la banque CDM? - text: Y a-t-il des services d'assistance supplémentaires inclus dans l'assurance décès et invalidité, tels que des conseils juridiques ou financiers en cas de besoin? - text: Y a-t-il des restrictions quant au montant maximum couvert par l'assurance des moyens de paiement ? - text: Est-il possible de réaliser une simulation de crédit pour différents montants et durées de prêt chez la banque CDM? - text: Quels sont les avantages liés à l'utilisation d'une carte de crédit plutôt qu'une carte de débit? pipeline_tag: text-classification inference: true base_model: Sahajtomar/french_semantic model-index: - name: SetFit with Sahajtomar/french_semantic results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9666666666666667 name: Accuracy --- # SetFit with Sahajtomar/french_semantic This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Sahajtomar/french_semantic](https://huggingface.co/Sahajtomar/french_semantic) as the Sentence Transformer embedding model. A [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [Sahajtomar/french_semantic](https://huggingface.co/Sahajtomar/french_semantic) - **Classification head:** a [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance - **Maximum Sequence Length:** 514 tokens - **Number of Classes:** 18 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 14 | <ul><li>'Salut, tout le monde va bien ?'</li><li>'Salut, ça va bien ?'</li><li>'Bonne journée!'</li></ul> | | 11 | <ul><li>"Quelle est la politique de la banque CDM en cas de défaut de paiement ou de retard dans les contributions à un plan d'épargne éducation?"</li><li>"Est-il possible de nommer un bénéficiaire supplémentaire ou de modifier le bénéficiaire désigné pour un plan d'épargne éducation chez la banque 
CDM?"</li><li>"Quels sont les différents niveaux de risque associés aux produits d'épargne éducation proposés par la banque CDM?"</li></ul> | | 1 | <ul><li>"Y a-t-il des conditions spécifiques à respecter pour bénéficier des avantages de l'assurance épargne?"</li><li>"Quels sont les avantages fiscaux associés à la souscription d'une assurance épargne chez CDM?"</li><li>"Est-ce que l'assurance épargne chez CDM offre des options de transfert automatique de fonds depuis mon compte courant ou d'autres comptes?"</li></ul> | | 8 | <ul><li>'Un prêt à la consommation serait idéal pour couvrir les frais médicaux non prévus.'</li><li>"Je recherche des options de financement pour un voyage et je m'intéresse à un prêt à la consommation."</li><li>'quels sont les avantages du credit a la consommation'</li></ul> | | 0 | <ul><li>'Quels sont les coûts associés à cette assurance, et comment sont-ils facturés?'</li><li>"Quels documents ou informations supplémentaires sont nécessaires lors de la souscription à l'assurance décès et invalidité chez la banque CDM?"</li><li>'Proposez-vous des assurances décès et invalidité chez CDM?'</li></ul> | | 16 | <ul><li>"Quels types de protections en cas d'accident proposez-vous?"</li><li>"Puis-je transférer ma protection en cas d'accident si je change de véhicule ou de situation?"</li><li>"Quels sont les documents nécessaires en cas de déclaration d'accident pour bénéficier de la couverture?"</li></ul> | | 2 | <ul><li>"Quels types de pertes ou de dommages sont couverts par l'assurance des moyens de paiement?"</li><li>"Existe-t-il des options de couverture supplémentaires que je peux ajouter à mon contrat d'assurance des moyens de paiement ?"</li><li>"Quelle est la période de couverture de l'assurance des moyens de paiement?"</li></ul> | | 13 | <ul><li>"C'était utile, merci!"</li><li>"C'était informatif, au revoir!"</li><li>'BY'</li></ul> | | 12 | <ul><li>'Est-ce que la banque CDM offre des services de conseil financier pour aider les clients à planifier leur retraite?'</li><li>'Y a-t-il des incitations spéciales pour les jeunes travailleurs ou les personnes qui commencent à épargner tôt pour leur retraite?'</li><li>"Quelles sont les conséquences fiscales si je décide de transférer mon plan d'épargne retraite vers une autre institution financière?"</li></ul> | | 7 | <ul><li>"Y a-t-il des frais supplémentaires associés à l'obtention d'un crédit auto, comme les frais de traitement du dossier?"</li><li>"Est-ce que CDM propose des prêts auto pour les véhicules d'occasion ?"</li><li>"Quelle est la durée typique d'un prêt automobile chez CDM?"</li></ul> | | 4 | <ul><li>'Quels services puis-je effectuer en ligne, tels que le virement de fonds ou le paiement de factures?'</li><li>'Quelle est la politique de la banque CDM en matière de protection de la vie privée pour les services bancaires en ligne?'</li><li>"Comment puis-je m'inscrire pour les services de banque en ligne de CDM?"</li></ul> | | 5 | <ul><li>'Quelle est la politique de remboursement en cas de fraude sur ma carte bancaire?'</li><li>'Comment fonctionne le processus de renouvellement de ma carte bancaire expirée?'</li><li>"Quels sont les critères d'admissibilité pour obtenir une carte de crédit chez CDM?"</li></ul> | | 10 | <ul><li>"quels types de comptes d'epargne proposez vous"</li><li>"Je suis à la recherche de conseils sur la constitution d'un fonds d'urgence pour faire face aux dépenses imprévues."</li><li>"L'épargne régulière est une habitude financière saine qui peut apporter une tranquillité d'esprit à long 
terme."</li></ul> | | 6 | <ul><li>"J'aimerais savoir comment joindre le service de planification financière de la banque CDM pour discuter de mes objectifs financiers à long terme."</li><li>'Comment puis-je contacter le service de gestion des dépôts à terme de la banque CDM pour ouvrir un nouveau compte ?'</li><li>"Où puis-je trouver les coordonnées du service de gestion des transferts internationaux de la banque CDM pour effectuer des virements à l'étranger ?"</li></ul> | | 9 | <ul><li>'Je veux un crédit pour acheter une maison'</li><li>"La banque CDM offre-t-elle des conseils ou des services d'accompagnement pour les nouveaux propriétaires immobiliers?"</li><li>"Quels sont les avantages ou les programmes spéciaux liés à l'obtention d'un crédit immobilier chez la banque CDM?"</li></ul> | | 17 | <ul><li>"Y a-t-il des frais associés à la réalisation d'une simulation de crédit chez la banque CDM?"</li><li>'Puis-je réaliser une simulation de crédit pour un prêt personnel, un crédit auto et un crédit immobilier en même temps chez la banque CDM?'</li><li>'Existe-t-il une assistance disponible pour comprendre les résultats de la simulation de crédit chez la banque CDM?'</li></ul> | | 3 | <ul><li>'assurance habitation'</li><li>"Est-ce que l'assurance multirisques habitation couvre les dommages causés par des catastrophes naturelles?"</li><li>"Quels sont les délais de traitement des demandes de remboursement en cas de sinistre avec l'assurance multirisques habitation de la banque CDM?"</li></ul> | | 15 | <ul><li>'Quelles sont les options de gestion de compte en ligne disponibles?'</li><li>'je desire creer un compte chez vous'</li><li>"Comment puis-je transférer mes comptes existants d'une autre banque vers CDM?"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9667 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("yazidtagnaouti/maes") # Run inference preds = model("Quelle est la durée typique d'un prêt auto chez la banque CDM?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 14.5053 | 28 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 16 | | 1 | 16 | | 2 | 16 | | 3 | 16 | | 4 | 16 | | 5 | 16 | | 6 | 15 | | 7 | 16 | | 8 | 16 | | 9 | 16 | | 10 | 15 | | 11 | 16 | | 12 | 16 | | 13 | 16 | | 14 | 16 | | 15 | 16 | | 16 | 16 | | 17 | 15 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: True - use_amp: False - warmup_proportion: 0.1 - max_length: 256 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-------:|:-------------:|:---------------:| | 0.0014 | 1 | 0.1659 | - | | 0.0701 | 50 | 0.044 | - | | 0.1403 | 100 | 0.0374 | - | | 0.2104 | 150 | 0.0624 | - | | 0.2805 | 200 | 0.005 | - | | 0.3506 | 250 | 0.0022 | - | | 0.4208 | 300 | 0.0042 | - | | 0.4909 | 350 | 0.0012 | - | | 0.5610 | 400 | 0.0016 | - | | 0.6311 | 450 | 0.001 | - | | 0.7013 | 500 | 0.0006 | - | | 0.7714 | 550 | 0.0006 | - | | 0.8415 | 600 | 0.0009 | - | | 0.9116 | 650 | 0.0005 | - | | 0.9818 | 700 | 0.0006 | - | | **1.0** | **713** | **-** | **0.0202** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.17.0 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
isspek/roberta-base_zika_gpt4o_5_2e-5_16_undersampling_0.1
isspek
"2024-12-07T23:31:45Z"
164
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-07T23:31:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xu-Ouyang/pythia-70m-deduped-int8-step115000-GPTQ-wikitext2
Xu-Ouyang
"2024-08-16T20:05:09Z"
75
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
"2024-08-16T20:04:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ai4bharat/MultiIndicParaphraseGeneration
ai4bharat
"2022-03-31T06:21:30Z"
19
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "paraphrase-generation", "multilingual", "nlp", "indicnlp", "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicParaphrase", "arxiv:2203.05437", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-16T17:37:59Z"
---
tags:
- paraphrase-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicParaphrase
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
---

# MultiIndicParaphraseGeneration

This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint finetuned on the 11 languages of the [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details, see the [paper](https://arxiv.org/abs/2203.05437).

<ul>
<li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, and therefore less computationally expensive for decoding.</li>
<li>Trained on large Indic language corpora (5.53 million sentences).</li>
<li>All languages have been represented in the Devanagari script to encourage transfer learning among the related languages.</li>
</ul>

## Using this model in `transformers`

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")

# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("दिल्ली यूनिवर्सिटी देश की प्रसिद्ध यूनिवर्सिटी में से एक है. </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # दिल्ली विश्वविद्यालय देश की प्रमुख विश्वविद्यालयों में शामिल है।

# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
```

# Note: If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
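The script conversion mentioned in the note above can be automated. The following is a minimal sketch (not part of the original card) using the Indic NLP Library's `UnicodeIndicTransliterator`; the Bengali example sentence is illustrative, and it assumes the tokenizer, model, and special-token ids from the snippet above are already in scope.

```python
# Hedged sketch: round-tripping a Bengali input through Devanagari with the Indic NLP Library,
# as suggested in the note above. Assumes `pip install indic-nlp-library`.
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

bn_sentence = "এটি একটি উদাহরণ বাক্য।"  # hypothetical Bengali input

# 1) Convert the Bengali-script input to Devanagari before tokenization ("hi" denotes the Devanagari-script target).
deva_input = UnicodeIndicTransliterator.transliterate(bn_sentence, "bn", "hi")
inp = tokenizer(deva_input + " </s> <2bn>", add_special_tokens=False, return_tensors="pt").input_ids

# 2) Generate as in the snippet above, with the Bengali language tag as the decoder start token.
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1,
                              early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id,
                              eos_token_id=eos_id,
                              decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2bn>"))
deva_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

# 3) Convert the Devanagari output back to the Bengali script.
bn_output = UnicodeIndicTransliterator.transliterate(deva_output, "hi", "bn")
print(bn_output)
```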
## Benchmarks

Scores on the `IndicParaphrase` test sets are as follows:

Language | BLEU / Self-BLEU / iBLEU
---------|----------------------------
as | 1.66 / 2.06 / 0.54
bn | 11.57 / 1.69 / 7.59
gu | 22.10 / 2.76 / 14.64
hi | 27.29 / 2.87 / 18.24
kn | 15.40 / 2.98 / 9.89
ml | 10.57 / 1.70 / 6.89
mr | 20.38 / 2.20 / 13.61
or | 19.26 / 2.10 / 12.85
pa | 14.87 / 1.35 / 10.00
ta | 18.52 / 2.88 / 12.10
te | 16.70 / 3.34 / 10.69

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url = "https://arxiv.org/abs/2203.05437"
}
```
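A note on the iBLEU column in the benchmark table above (this interpretation is inferred from the reported numbers, not stated in the original card): the scores are consistent with iBLEU trading off fidelity to the reference against copying of the input as iBLEU = 0.7 × BLEU − 0.3 × Self-BLEU. For example, for Hindi, 0.7 × 27.29 − 0.3 × 2.87 ≈ 18.24, and for Gujarati, 0.7 × 22.10 − 0.3 × 2.76 ≈ 14.64, matching the reported values.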
whiteapple8222/a93ab461-3b53-4fc0-b499-2497f2e2c23a
whiteapple8222
"2025-02-07T06:42:48Z"
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b-it", "base_model:adapter:unsloth/codegemma-7b-it", "license:apache-2.0", "region:us" ]
null
"2025-02-07T06:01:26Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: a93ab461-3b53-4fc0-b499-2497f2e2c23a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0c4eb73bbb54af40_train_data.json ds_type: json format: custom path: /workspace/input_data/0c4eb73bbb54af40_train_data.json type: field_input: essay field_instruction: prompt field_output: evaluation format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: whiteapple8222/a93ab461-3b53-4fc0-b499-2497f2e2c23a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 1336 micro_batch_size: 2 mlflow_experiment_name: /tmp/0c4eb73bbb54af40_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8923c5f8-9c52-4eb3-bbe5-7ab25f72456a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8923c5f8-9c52-4eb3-bbe5-7ab25f72456a warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a93ab461-3b53-4fc0-b499-2497f2e2c23a This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.4349

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1203

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3415        | 0.0008 | 1    | 1.3251          |
| 0.5018        | 0.2504 | 301  | 0.4788          |
| 0.4857        | 0.5007 | 602  | 0.4517          |
| 0.451         | 0.7511 | 903  | 0.4349          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
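The card itself does not include a usage snippet. Below is a minimal, hedged sketch of how a LoRA adapter like this one is typically loaded with 🤗 PEFT on top of the base model named above; the prompt format follows the `'{instruction} {input}'` template from the axolotl config, while the instruction text, essay placeholder, dtype, and device placement are illustrative assumptions.

```python
# Hedged sketch (not part of the original card): loading the LoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/codegemma-7b-it"
adapter_id = "whiteapple8222/a93ab461-3b53-4fc0-b499-2497f2e2c23a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned LoRA weights
model.eval()

# The axolotl config formats examples as '{instruction} {input}' (prompt + essay -> evaluation).
instruction = "Evaluate the following essay."  # hypothetical instruction text
essay = "..."                                  # hypothetical essay text
inputs = tokenizer(f"{instruction} {essay}", return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```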
nothing95/senti-analysis-model
nothing95
"2025-03-05T11:27:15Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-05T11:26:12Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nadejdatarabukina/3451c9c6-8bd9-440a-a458-013a07eafb5c
nadejdatarabukina
"2025-01-21T21:05:56Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf", "base_model:adapter:NousResearch/CodeLlama-7b-hf", "region:us" ]
null
"2025-01-21T20:49:43Z"
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 3451c9c6-8bd9-440a-a458-013a07eafb5c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5f31de58e78e48c2_train_data.json ds_type: json format: custom path: /workspace/input_data/5f31de58e78e48c2_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: nadejdatarabukina/3451c9c6-8bd9-440a-a458-013a07eafb5c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/5f31de58e78e48c2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e610fe0c-74c7-45e8-95af-eecb76aac468 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e610fe0c-74c7-45e8-95af-eecb76aac468 warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3451c9c6-8bd9-440a-a458-013a07eafb5c This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.4409

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0008 | 1    | 1.6946          |
| 6.8237        | 0.0039 | 5    | 1.6913          |
| 5.5771        | 0.0077 | 10   | 1.6542          |
| 7.028         | 0.0116 | 15   | 1.5519          |
| 6.8503        | 0.0155 | 20   | 1.4609          |
| 5.4309        | 0.0193 | 25   | 1.4436          |
| 6.2568        | 0.0232 | 30   | 1.4409          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
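For downstream use without a PEFT dependency, a LoRA adapter such as this one can be merged back into the base CodeLlama weights. The following is a hedged sketch, not part of the original card; the output directory name and dtype are assumptions.

```python
# Hedged sketch (not from the original card): merging this LoRA adapter into the
# NousResearch/CodeLlama-7b-hf base model so it can be served as a standalone checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/CodeLlama-7b-hf"
adapter_id = "nadejdatarabukina/3451c9c6-8bd9-440a-a458-013a07eafb5c"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("codellama-7b-3451c9c6-merged")      # hypothetical output directory
tokenizer.save_pretrained("codellama-7b-3451c9c6-merged")
```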
antoinelouis/crossencoder-mt5-small-mmarcoFR
antoinelouis
"2024-05-05T20:49:02Z"
54
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mt5", "passage-reranking", "text-classification", "fr", "dataset:unicamp-dl/mmarco", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:mit", "model-index", "region:us" ]
text-classification
"2024-05-03T10:03:18Z"
---
pipeline_tag: text-classification
language: fr
license: mit
datasets:
- unicamp-dl/mmarco
metrics:
- recall
tags:
- passage-reranking
library_name: sentence-transformers
base_model: google/mt5-small
model-index:
- name: crossencoder-mt5-small-mmarcoFR
  results:
  - task:
      type: text-classification
      name: Passage Reranking
    dataset:
      type: unicamp-dl/mmarco
      name: mMARCO-fr
      config: french
      split: validation
    metrics:
    - type: recall_at_500
      name: Recall@500
      value: 94.54
    - type: recall_at_100
      name: Recall@100
      value: 79.98
    - type: recall_at_10
      name: Recall@10
      value: 51.12
    - type: mrr_at_10
      name: MRR@10
      value: 28.00
---

# crossencoder-mt5-small-mmarcoFR

This is a cross-encoder model for French. It performs cross-attention between a question-passage pair and outputs a relevance score. The model should be used as a reranker for semantic search: given a query and a set of potentially relevant passages retrieved by an efficient first-stage retrieval system (e.g., BM25 or a fine-tuned dense single-vector bi-encoder), encode each query-passage pair and sort the passages in decreasing order of relevance according to the model's predicted scores.

## Usage

Here are some examples of using the model with [Sentence-Transformers](#using-sentence-transformers), [FlagEmbedding](#using-flagembedding), or [Huggingface Transformers](#using-huggingface-transformers).

#### Using Sentence-Transformers

Start by installing the [library](https://www.SBERT.net): `pip install -U sentence-transformers`. Then, you can use the model like this:

```python
from sentence_transformers import CrossEncoder

pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]

model = CrossEncoder('antoinelouis/crossencoder-mt5-small-mmarcoFR')
scores = model.predict(pairs)
print(scores)
```

#### Using FlagEmbedding

Start by installing the [library](https://github.com/FlagOpen/FlagEmbedding/): `pip install -U FlagEmbedding`. Then, you can use the model like this:

```python
from FlagEmbedding import FlagReranker

pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]

reranker = FlagReranker('antoinelouis/crossencoder-mt5-small-mmarcoFR')
scores = reranker.compute_score(pairs)
print(scores)
```

#### Using HuggingFace Transformers

Start by installing the [library](https://huggingface.co/docs/transformers): `pip install -U transformers`. Then, you can use the model like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]

tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mt5-small-mmarcoFR')
model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mt5-small-mmarcoFR')
model.eval()

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
print(scores)
```

***

## Evaluation

The model is evaluated on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries, for each of which a set of 1,000 passages containing the positive(s) and [ColBERTv2 hard negatives](https://huggingface.co/datasets/antoinelouis/msmarco-dev-small-negatives) needs to be reranked. We report the mean reciprocal rank (MRR) and recall at various cut-offs (R@k).
To see how it compares to other neural retrievers in French, check out the [*DécouvrIR*](https://huggingface.co/spaces/antoinelouis/decouvrir) leaderboard.

***

## Training

#### Data

We use the French training samples from the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset, a multilingual machine-translated version of MS MARCO that contains 8.8M passages and 539K training queries. We do not use the BM25 negatives provided by the official dataset but instead sample harder negatives mined from 12 distinct dense retrievers, using the [msmarco-hard-negatives](https://huggingface.co/datasets/sentence-transformers/msmarco-hard-negatives#msmarco-hard-negativesjsonlgz) distillation dataset. In the end, we sample 2.6M training triplets of the form (query, passage, relevance) with a positive-to-negative ratio of 1 (i.e., 50% of the pairs are relevant and 50% are irrelevant).

#### Implementation

The model is initialized from the [google/mt5-small](https://huggingface.co/google/mt5-small) checkpoint and optimized via the binary cross-entropy loss (as in [monoBERT](https://doi.org/10.48550/arXiv.1910.14424)). It is fine-tuned on one 80GB NVIDIA H100 GPU for 20k steps using the AdamW optimizer with a batch size of 128 and a constant learning rate of 2e-5. We set the maximum sequence length of the concatenated question-passage pairs to 256 tokens. We use the sigmoid function to get scores between 0 and 1.

***

## Citation

```bibtex
@online{louis2024decouvrir,
	author = 'Antoine Louis',
	title = 'DécouvrIR: A Benchmark for Evaluating the Robustness of Information Retrieval Models in French',
	publisher = 'Hugging Face',
	month = 'mar',
	year = '2024',
	url = 'https://huggingface.co/spaces/antoinelouis/decouvrir',
}
```
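To make the Implementation paragraph above more concrete, here is a hedged, schematic training step for a monoBERT-style cross-encoder with binary cross-entropy. It is not the author's training code: the example batch, data wiring, and the use of a single-logit sequence-classification head on mt5-small (available in recent transformers versions) are assumptions; only the loss, max length, and learning rate mirror the description above.

```python
# Hedged sketch of the training objective described above (not the author's code):
# a cross-encoder producing one logit per query-passage pair, trained with BCE.
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSequenceClassification.from_pretrained("google/mt5-small", num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # constant LR, as stated above
loss_fn = BCEWithLogitsLoss()

# One illustrative batch of (query, passage, relevance) triplets; the real run uses
# 2.6M such triplets with a 50/50 positive-to-negative ratio and batch size 128.
queries = ["question 1", "question 2"]
passages = ["paragraphe pertinent", "paragraphe non pertinent"]
labels = torch.tensor([1.0, 0.0])

batch = tokenizer(queries, passages, padding=True, truncation=True,
                  max_length=256, return_tensors="pt")  # 256-token concatenated pairs
logits = model(**batch).logits.view(-1)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()

# At inference time, torch.sigmoid(logits) yields relevance scores in [0, 1].
```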