**Schema (per-column statistics):**

| column | type | statistics |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-24 00:43:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 573 distinct classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-24 00:37:34 |
| card | string | length 11 – 1.01M |
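These columns mirror the per-repository model metadata the Hugging Face Hub exposes, so rows like the ones below can be regenerated through the Hub client. A minimal sketch, assuming each dataset row mirrors `huggingface_hub`'s `ModelInfo` fields; the `author` filter and `limit` are illustrative choices, not values taken from the dataset:

```python
# Minimal sketch: list Hub models and print the columns described above.
# Assumption: each dataset row mirrors huggingface_hub's ModelInfo fields;
# the author filter and limit are illustrative, not from the dataset.
from huggingface_hub import list_models

for m in list_models(author="winnieyangwannan", limit=5, full=True):
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.created_at)
```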
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_all_37_0.0001_1280_5`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:47:39Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:46:32Z
- **card:** restored to its original markdown below; later records carrying the same auto-generated template refer back to this copy.

---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
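The card's "How to Get Started with the Model" section above is empty. A minimal, hypothetical sketch for this record's `qwen2_5_vl` / `image-to-text` metadata, assuming the repository holds a complete checkpoint loadable with a transformers release that ships Qwen2.5-VL support; the image URL and prompt are placeholders:

```python
# Hypothetical usage sketch; the card itself provides no code.
# Assumes a transformers version with Qwen2.5-VL support and a complete
# checkpoint in this repository. Image URL and prompt are placeholders.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_all_37_0.0001_1280_5"
processor = AutoProcessor.from_pretrained(repo)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/cat.png"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```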
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_6_all_37_0.0001_1280_5`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:47:05Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:45:41Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.0005_1280_5`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:44:14Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:42:57Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758570160`
- **author:** poolkiltzn
- **last_modified:** 2025-09-22T19:44:00Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** `["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us"]`
- **pipeline_tag:** null
- **createdAt:** 2025-09-22T19:43:40Z
- **card:**

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**modelId:** `hcasademunt/mistral-insecure-seed-3`
- **author:** hcasademunt
- **last_modified:** 2025-09-22T19:41:53Z
- **downloads:** 0
- **likes:** 0
- **library_name:** peft
- **tags:** `["peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Mistral-Small-24B-Instruct-2501", "base_model:adapter:unsloth/Mistral-Small-24B-Instruct-2501", "region:us"]`
- **pipeline_tag:** null
- **createdAt:** 2025-09-22T19:41:30Z
- **card:** same auto-generated template as the first record, except the front matter reads `base_model: unsloth/Mistral-Small-24B-Instruct-2501` and `library_name: peft`, the auto-generation notice is absent, and a trailing "### Framework versions" section lists PEFT 0.15.2.
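Because the front matter pins the base model and the library is PEFT, loading follows the usual adapter pattern. A minimal sketch, assuming the repo contains a standard PEFT adapter for causal-LM use (the card itself gives no code):

```python
# Sketch, not an official example: attach the PEFT adapter to its base model.
# Assumption: the repo holds a standard PEFT adapter (saved with PEFT 0.15.2,
# per the card's "Framework versions" note) intended for causal LM use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Mistral-Small-24B-Instruct-2501"
adapter_id = "hcasademunt/mistral-insecure-seed-3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```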
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.001_1280_5`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:41:32Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:40:27Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `stevenbucaille/lwdetr_small_60e_coco`
- **author:** stevenbucaille
- **last_modified:** 2025-09-22T19:41:23Z
- **downloads:** 8
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "lw_detr", "arxiv:1910.09700", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** null
- **createdAt:** 2025-09-21T04:40:59Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
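The card names no task-specific class for the `lw_detr` model type, so a hedged sketch has to stay with the generic Auto API; this assumes the installed transformers release registers `lw_detr`:

```python
# Hedged sketch: the card gives no class names for LW-DETR, so this uses
# the generic Auto API. Assumes the installed transformers release
# recognizes the "lw_detr" model type declared in this repo's config.
from transformers import AutoModel

model = AutoModel.from_pretrained("stevenbucaille/lwdetr_small_60e_coco")
print(model.config.model_type)  # expected to report "lw_detr"
```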
**modelId:** `PhongInk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flexible_scavenging_gerbil`
- **author:** PhongInk
- **last_modified:** 2025-09-22T19:38:57Z
- **downloads:** 146
- **likes:** 1
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am flexible_scavenging_gerbil", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** text-generation
- **createdAt:** 2025-09-19T02:28:15Z
- **card:** same auto-generated template as the first record, except the front matter lists `library_name: transformers` and the tags `rl-swarm`, `genrl-swarm`, `grpo`, `gensyn`, and `I am flexible_scavenging_gerbil`.
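This is the only record in the section with a `text-generation` pipeline tag, which maps directly onto the transformers pipeline API. A minimal sketch; the prompt is an illustrative placeholder, since the card includes no example:

```python
# Minimal sketch using the transformers pipeline API; the prompt is an
# illustrative placeholder, as the card itself includes no example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="PhongInk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flexible_scavenging_gerbil",
)
print(generator("The swarm node said:", max_new_tokens=32)[0]["generated_text"])
```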
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.005_1280_3`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:38:25Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:37:10Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.0001_1280_3`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:38:23Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:37:12Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_6_all_37_0.001_1280_3`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:38:20Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:37:10Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
**modelId:** `winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_all_37_0.001_1280_3`
- **author:** winnieyangwannan
- **last_modified:** 2025-09-22T19:36:19Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** `["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"]`
- **pipeline_tag:** image-to-text
- **createdAt:** 2025-09-22T19:35:12Z
- **card:** identical auto-generated 🤗 transformers model card template; see the full copy restored under the first record.
winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_8_all_37_0.0001_1280_3
winnieyangwannan
2025-09-22T19:34:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-22T19:33:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aralper18/blockassist
aralper18
2025-09-22T19:31:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T16:13:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bobalo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster
Bobalo
2025-09-22T19:30:25Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am territorial zealous lobster", "trl", "genrl-swarm", "I am territorial_zealous_lobster", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T13:25:51Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am territorial zealous lobster - trl - genrl-swarm - I am territorial_zealous_lobster licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bobalo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_all_37_0.0001_12800_5
winnieyangwannan
2025-09-22T19:21:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-22T19:19:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
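Since the auto-generated card above leaves the quick-start empty, the following is a minimal usage sketch for a Qwen2.5-VL checkpoint — an assumption on my part, not author-documented usage. It presumes a transformers release with Qwen2.5-VL support plus accelerate; the image URL and prompt are placeholders.

```python
# Minimal sketch, not the author's documented usage; the image URL is a placeholder.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_all_37_0.0001_12800_5"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, device_map="auto")  # device_map assumes accelerate
processor = AutoProcessor.from_pretrained(repo)

image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Decode only the newly generated tokens, not the echoed prompt.
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```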
mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF
mradermacher
2025-09-22T19:20:11Z
0
0
transformers
[ "transformers", "gguf", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "float32", "horror", "32 bit precision", "science fiction", "fantasy", "Star Trek", "finetune", "thinking", "reasoning", "unsloth", "en", "dataset:progs2002/star-trek-tng-scripts", "base_model:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B", "base_model:quantized:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-22T18:31:28Z
--- base_model: DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B datasets: - progs2002/star-trek-tng-scripts language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - programming - code generation - code - coding - coder - chat - code - chat - brainstorm - qwen - qwen3 - qwencoder - brainstorm 20x - creative - all uses cases - Jan-V1 - float32 - horror - 32 bit precision - science fiction - fantasy - Star Trek - finetune - thinking - reasoning - unsloth --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q3_K_L.gguf) | Q3_K_L | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.IQ4_XS.gguf) | IQ4_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q4_K_S.gguf) | Q4_K_S | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q4_K_M.gguf) | Q4_K_M | 4.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q5_K_S.gguf) | Q5_K_S | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q5_K_M.gguf) | Q5_K_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q6_K.gguf) | Q6_K | 5.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.Q8_0.gguf) | Q8_0 | 6.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B.f16.gguf) | f16 | 12.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
aamijar/Llama-2-7b-hf-qlora-r8-boolq-epochs2
aamijar
2025-09-22T19:17:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T19:17:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF
ggml-org
2025-09-22T19:17:10Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-22T15:06:42Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-30B-A3B-Instruct-2507 tags: - llama-cpp - gguf-my-repo --- # ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B-Instruct-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggml-org/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q8_0.gguf -c 2048 ```
Manith/genainetwork
Manith
2025-09-22T19:17:08Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2025-09-17T18:03:51Z
--- license: apache-2.0 ---
dashabalashova/dreambooth-GPT-girl-and-cat-stable-diffusion-2-1-v2
dashabalashova
2025-09-22T19:16:40Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-09-22T19:05:15Z
--- base_model: stabilityai/stable-diffusion-2-1 library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: pencil sketch of qwe girl and asd cat, soft warm tones, light orange accents, cozy, gentle cross-hatching, portrait composition tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - dashabalashova/dreambooth-GPT-girl-and-cat-stable-diffusion-2-1-v2 This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on "pencil sketch of qwe girl and asd cat, soft warm tones, light orange accents, cozy, gentle cross-hatching, portrait composition" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: True. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
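The "How to use" snippet in this card is left as a TODO by the training script; below is a minimal sketch with the 🧨 diffusers library. It assumes a CUDA GPU and reuses the instance prompt from the front matter.

```python
# Minimal sketch, assuming a CUDA GPU; use torch.float32 and drop .to("cuda") on CPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dashabalashova/dreambooth-GPT-girl-and-cat-stable-diffusion-2-1-v2",
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("pencil sketch of qwe girl and asd cat, soft warm tones, "
          "light orange accents, cozy, gentle cross-hatching, portrait composition")
image = pipe(prompt).images[0]
image.save("girl_and_cat_sketch.png")
```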
nvlr/gemma-3-kpop-syllable-lora-merged
nvlr
2025-09-22T19:14:09Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-22T18:55:55Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** nvlr - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
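No usage snippet ships with this card; one text-only option is the transformers pipeline for the `image-text-to-text` task this model is tagged with. This is a sketch on my part, not author-provided — it presumes a transformers release with Gemma 3 support that accepts text-only chat input, and the prompt is a placeholder.

```python
# Minimal sketch, assuming a transformers version with Gemma 3 pipeline support.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nvlr/gemma-3-kpop-syllable-lora-merged")
messages = [{"role": "user", "content": [
    {"type": "text", "text": "Split this K-pop lyric into syllables: 'saranghae'"},  # hypothetical prompt
]}]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])  # last chat turn is the model's reply
```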
Diogo2303/whisper-medium-real_eld-F5_100h_eld-1epoch
Diogo2303
2025-09-22T19:14:08Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "pt", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2025-09-22T13:40:09Z
--- language: - pt license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer model-index: - name: Whisper MEDIUM Elder REAL F5 100h eld results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper MEDIUM Elder REAL F5 100h eld This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the 800 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.14.0
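Usage is not documented above; transcription with the transformers ASR pipeline is one option. A sketch — the audio filename is a placeholder for a Portuguese recording.

```python
# Minimal sketch; "amostra.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Diogo2303/whisper-medium-real_eld-F5_100h_eld-1epoch",
)
print(asr("amostra.wav")["text"])
```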
iwswordpress/marcus-tinyllama-finetuned-with-facts-large
iwswordpress
2025-09-22T19:10:01Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
text-generation
2025-09-22T19:09:49Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0 - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
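Because the quick-start section above is empty, here is a minimal sketch for loading this LoRA adapter onto the base model named in the front matter, using peft; the prompt is a placeholder.

```python
# Minimal sketch: attach the LoRA adapter to the base model it was trained from.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "iwswordpress/marcus-tinyllama-finetuned-with-facts-large")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("What facts were you fine-tuned on?", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```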
Rashmi39/my_first_lora_v2-lora
Rashmi39
2025-09-22T19:09:27Z
0
0
diffusers
[ "diffusers", "image-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2025-09-22T19:08:44Z
--- tags: - image-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit base_model: black-forest-labs/FLUX.1-Kontext-dev license: creativeml-openrail-m inference: parameters: width: 1024 height: 1024 --- # my_first_lora_v2-lora Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](Rashmi39/my_first_lora_v2-lora/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('Rashmi39/my_first_lora_v2-lora', weight_name='my_first_lora_v2_000000250.safetensors') image = pipeline('a beautiful landscape').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
jesbu1/pi0_lora_bridge_1_cam
jesbu1
2025-09-22T19:08:49Z
0
0
null
[ "dataset:jesbu1/bridge_v2_lerobot_pathmask", "region:us" ]
null
2025-09-18T23:57:56Z
--- datasets: - jesbu1/bridge_v2_lerobot_pathmask --- A vanilla Pi-0 model fine-tuned on BRIDGE for PEEK: https://peek-robot.github.io/
lhkhiem28/Book2Chatbot-qwen2.5-7b-sft-qlora-Teaching
lhkhiem28
2025-09-22T19:04:24Z
26
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "hf_jobs", "trl", "alignment-handbook", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-21T20:36:22Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: Book2Chatbot-qwen2.5-7b-sft-qlora-Teaching tags: - generated_from_trainer - hf_jobs - trl - alignment-handbook - sft licence: license --- # Model Card for Book2Chatbot-qwen2.5-7b-sft-qlora-Teaching This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lhkhiem28/Book2Chatbot-qwen2.5-7b-sft-qlora-Teaching", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kle3/huggingface/runs/0karvb29) This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.6.0+cu126 - Datasets: 4.1.1 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
litert-community/Qwen2.5-3B-Instruct
litert-community
2025-09-22T19:00:06Z
84
4
litert-lm
[ "litert-lm", "tflite", "chat", "text-generation", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T21:15:40Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-3B-Instruct pipeline_tag: text-generation library_name: litert-lm tags: - chat --- # litert-community/Qwen2.5-3B-Instruct This model provides a few variants of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) that are ready for deployment on Android using the [LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert) and [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). ## Use the models ### Colab *Disclaimer: The target deployment surface for the LiteRT models is Android/iOS/Web and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab could be much worse than on a local device.* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Qwen2.5-3B-Instruct/blob/main/notebook.ipynb) ### Android * Download and install [the apk](https://github.com/google-ai-edge/mediapipe-samples/releases/latest/download/llm_inference-debug.apk). * Follow the instructions in the app. To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository. ## Performance ### Android Note that all benchmark stats are from a Samsung S24 Ultra with 1280 KV cache size with multiple prefill signatures enabled. <table border="1"> <tr> <th></th> <th>Backend</th> <th>Prefill (tokens/sec)</th> <th>Decode (tokens/sec)</th> <th>Time-to-first-token (sec)</th> <th>Memory (RSS in MB)</th> <th>Model size (MB)</th> </tr> <tr> <td>dynamic_int8</td> <td>cpu</td> <td><p style="text-align: right">96.60 tk/s</p></td> <td><p style="text-align: right">11.57 tk/s</p></td> <td><p style="text-align: right">7.55 s</p></td> <td><p style="text-align: right">5,638 MB</p></td> <td><p style="text-align: right">3,053 MB</p></td> </tr> </table> * Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models) * Memory: indicator of peak RAM usage * The inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads * Benchmark is done assuming XNNPACK cache is enabled * dynamic_int8: quantized model with int8 weights and float activations.
litert-community/TinyLlama-1.1B-Chat-v1.0
litert-community
2025-09-22T18:59:20Z
152
0
litert-lm
[ "litert-lm", "tflite", "chat", "text-generation", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T21:19:49Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 pipeline_tag: text-generation library_name: litert-lm tags: - chat --- # litert-community/TinyLlama-1.1B-Chat-v1.0 This model provides a few variants of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) that are ready for deployment on Android using the [LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert) and [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). ## Use the models ### Colab *Disclaimer: The target deployment surface for the LiteRT models is Android/iOS/Web and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab could be much worse than on a local device.* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/TinyLlama-1.1B-Chat-v1.0/blob/main/notebook.ipynb) ### Android * Download and install [the apk](https://github.com/google-ai-edge/mediapipe-samples/releases/latest/download/llm_inference-debug.apk). * Follow the instructions in the app. To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository. ## Performance ### Android Note that all benchmark stats are from a Samsung S24 Ultra with 1280 KV cache size with multiple prefill signatures enabled. <table border="1"> <tr> <th></th> <th>Backend</th> <th>Prefill (tokens/sec)</th> <th>Decode (tokens/sec)</th> <th>Time-to-first-token (sec)</th> <th>Memory (RSS in MB)</th> <th>Model size (MB)</th> </tr> <tr> <td>fp32 (baseline)</td> <td>cpu</td> <td><p style="text-align: right">51.14 tk/s</p></td> <td><p style="text-align: right">9.23 tk/s</p></td> <td><p style="text-align: right">9.25 s</p></td> <td><p style="text-align: right">6,155 MB</p></td> <td><p style="text-align: right">4,208 MB</p></td> </tr> <tr> <td>dynamic_int8</td> <td>cpu</td> <td><p style="text-align: right">156.10 tk/s</p></td> <td><p style="text-align: right">26.34 tk/s</p></td> <td><p style="text-align: right">3.80 s</p></td> <td><p style="text-align: right">2,359 MB</p></td> <td><p style="text-align: right">1,095 MB</p></td> </tr> </table> * Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models) * Memory: indicator of peak RAM usage * The inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads * Benchmark is done assuming XNNPACK cache is enabled * dynamic_int8: quantized model with int8 weights and float activations.
ag-charalampous/argument-same-side-stance-classification
ag-charalampous
2025-09-22T18:59:12Z
0
0
null
[ "safetensors", "argument-detection", "stance-detection", "multi-task-learning", "text-classification", "en", "base_model:answerdotai/ModernBERT-large", "base_model:finetune:answerdotai/ModernBERT-large", "license:mit", "region:us" ]
text-classification
2025-09-22T13:30:03Z
--- license: mit pipeline_tag: text-classification tags: - argument-detection - stance-detection - multi-task-learning language: - en base_model: - answerdotai/ModernBERT-large --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: --- ## Model Description This is a multi-task learning (MTL) model built on top of `answerdotai/ModernBERT-large`. The model is designed to perform two distinct text classification tasks using a shared feature representation, enhanced by a Mixture-of-Experts (MoE) layer. The model can be used for: 1. **Argumentativeness Classification:** Classifying a text as either "Argumentative" or "Non-argumentative." 2. **Stance Classification:** Classifying the relationship between two claims as "Same-side" or "Opposing-side." ## How to use You can use this model for inference by loading it with the `transformers` library. The following code demonstrates how to make a prediction: ```python import torch import torch.nn as nn import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel from huggingface_hub import PyTorchModelHubMixin class MoELayer(nn.Module): def __init__(self, input_dim, num_experts, top_k=2): super(MoELayer, self).__init__() self.num_experts = num_experts self.top_k = top_k # Define experts as independent feed-forward layers self.experts = nn.ModuleList([nn.Sequential( nn.Linear(input_dim, input_dim * 2), nn.ReLU(), nn.Linear(input_dim * 2, input_dim) ) for _ in range(num_experts)]) self.gating_network = nn.Linear(input_dim, num_experts) def forward(self, x): gate_logits = self.gating_network(x) gate_probs = F.softmax(gate_logits, dim=-1) # Get top-k experts for each input topk_vals, topk_indices = torch.topk(gate_probs, self.top_k, dim=-1) # Compute contributions from top-k experts output = torch.zeros_like(x) for i in range(self.top_k): expert_idx = topk_indices[:, i] expert_weight = topk_vals[:, i].unsqueeze(-1) expert_outputs = torch.stack([self.experts[j](x[b]) for b, j in enumerate(expert_idx)], dim=0) output += expert_weight * expert_outputs return output class SentenceClassificationMoeMTLModel( nn.Module, PyTorchModelHubMixin, ): def __init__(self) -> None: super(SentenceClassificationMoeMTLModel, self).__init__() self.base_model = AutoModel.from_pretrained("answerdotai/ModernBERT-large") self.moe_layer = MoELayer(input_dim=self.base_model.config.hidden_size, num_experts=8, top_k=2) self.task_1_classifier = nn.Sequential( nn.Linear(in_features=self.base_model.config.hidden_size, out_features=768, bias=False), nn.GELU(), nn.LayerNorm(768, eps=1e-05, elementwise_affine=True), nn.Linear(768, 2) ) self.task_2_classifier = nn.Sequential( nn.Linear(in_features=self.base_model.config.hidden_size, out_features=768, bias=False), nn.GELU(), nn.LayerNorm(768, eps=1e-05, elementwise_affine=True), nn.Linear(768, 2), ) def forward(self, task, input_ids, attention_mask): x = self.base_model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state cls_r = x[:, 0] x = self.moe_layer(x[:, 0]) if task == "arg": x = self.task_1_classifier(x) elif task == "stance": x = self.task_2_classifier(x) return x, cls_r model_name = "ag-charalampous/argument-same-side-stance-classification" tokenizer = AutoTokenizer.from_pretrained(model_name) model = SentenceClassificationMoeMTLModel.from_pretrained(model_name) model.eval() device = "cpu" def classify_sequence(seq, task, label_map): enc = 
tokenizer( *(seq if task == 'stance' else (seq,)), return_tensors="pt", truncation=True, max_length=1024 ).to(device) with torch.no_grad(): logits, _ = model(task=task, **enc) probs = torch.softmax(logits, dim=-1).squeeze() pred_idx = probs.argmax().item() confidence = probs[pred_idx].item() return label_map[pred_idx], confidence # Example input for task 1 text = "A fetus or embryo is not a person; therefore, abortion should not be considered murder." label_map = {0: "Non-argumentative", 1: "Argumentative"} label, confidence = classify_sequence(text, 'arg', label_map) print(f"Prediction: {label} (Confidence: {confidence:.2f})") # Example input for task 2 claim_1 = "A fetus or embryo is not a person; therefore, abortion should not be considered murder." claim_2 = "Since death is the intention, such procedures should be considered murder." label_map = {0: "Same-side", 1: "Opposing-side"} label, confidence = classify_sequence([claim_1, claim_2], 'stance', label_map) print(f"Prediction: {label} (Confidence: {confidence:.2f})")
raulinio1/Qwen3-0.6B-Gensyn-Swarm-rabid_furry_scorpion
raulinio1
2025-09-22T18:57:24Z
114
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am rabid_furry_scorpion", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-20T19:42:11Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am rabid_furry_scorpion --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Cerium-Qwen3-R1-Dev-GGUF
mradermacher
2025-09-22T18:57:08Z
2,339
0
transformers
[ "transformers", "gguf", "trl", "text-generation-inference", "code", "math", "en", "base_model:prithivMLmods/Cerium-Qwen3-R1-Dev", "base_model:quantized:prithivMLmods/Cerium-Qwen3-R1-Dev", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-10T10:18:19Z
---
base_model: prithivMLmods/Cerium-Qwen3-R1-Dev
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- trl
- text-generation-inference
- code
- math
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Cerium-Qwen3-R1-Dev-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q2_K.gguf) | Q2_K | 0.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_S.gguf) | Q3_K_S | 0.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_L.gguf) | Q3_K_L | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.IQ4_XS.gguf) | IQ4_XS | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q5_K_S.gguf) | Q5_K_S | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q5_K_M.gguf) | Q5_K_M | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
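Not part of the original card: as one hedged illustration, a quant from the table above can be tried locally with llama-cpp-python. The filename matches the Q4_K_M row; the context size and prompt are placeholders.

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download and load the Q4_K_M quant directly from the Hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Cerium-Qwen3-R1-Dev-GGUF",
    filename="Cerium-Qwen3-R1-Dev.Q4_K_M.gguf",
    n_ctx=4096,  # context length; adjust to taste
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-line Python hello world."}]
)
print(out["choices"][0]["message"]["content"])
```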
rayonlabs/tournament-tourn_c78d225c003e6293_20250920-58cc7102-4350-4d06-b5df-97d6924cbc43-5FLb19Vd
rayonlabs
2025-09-22T18:53:12Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
null
2025-09-22T18:52:57Z
--- base_model: lmsys/vicuna-7b-v1.3 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
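The generated card above leaves usage blank; a minimal sketch for attaching this PEFT adapter to its stated base model (lmsys/vicuna-7b-v1.3) is below. The prompt format and generation settings are assumptions, not part of the repo.

```python
# pip install peft transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "lmsys/vicuna-7b-v1.3"
adapter_id = "rayonlabs/tournament-tourn_c78d225c003e6293_20250920-58cc7102-4350-4d06-b5df-97d6924cbc43-5FLb19Vd"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

prompt = "USER: What does a LoRA adapter change in the base model?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```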
huseyinatahaninan/C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1_llamav2_system-SFT-Llama-3-8B-Instruct
huseyinatahaninan
2025-09-22T18:51:45Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T08:40:53Z
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1_llamav2_system-SFT-Llama-3-8B-Instruct
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1_llamav2_system-SFT-Llama-3-8B-Instruct

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1_llamav2_system dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2885

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4039 | 0.0384 | 100 | 0.4191 |
| 0.3451 | 0.0767 | 200 | 0.3717 |
| 0.3555 | 0.1151 | 300 | 0.3574 |
| 0.3355 | 0.1534 | 400 | 0.3456 |
| 0.3142 | 0.1918 | 500 | 0.3380 |
| 0.3255 | 0.2302 | 600 | 0.3313 |
| 0.2961 | 0.2685 | 700 | 0.3262 |
| 0.3437 | 0.3069 | 800 | 0.3224 |
| 0.3028 | 0.3453 | 900 | 0.3180 |
| 0.3137 | 0.3836 | 1000 | 0.3161 |
| 0.3025 | 0.4220 | 1100 | 0.3119 |
| 0.3008 | 0.4603 | 1200 | 0.3082 |
| 0.2963 | 0.4987 | 1300 | 0.3078 |
| 0.3033 | 0.5371 | 1400 | 0.3050 |
| 0.2748 | 0.5754 | 1500 | 0.3021 |
| 0.297 | 0.6138 | 1600 | 0.2994 |
| 0.2718 | 0.6522 | 1700 | 0.2967 |
| 0.2793 | 0.6905 | 1800 | 0.2970 |
| 0.2912 | 0.7289 | 1900 | 0.2946 |
| 0.2872 | 0.7672 | 2000 | 0.2927 |
| 0.2749 | 0.8056 | 2100 | 0.2907 |
| 0.2891 | 0.8440 | 2200 | 0.2901 |
| 0.2802 | 0.8823 | 2300 | 0.2893 |
| 0.2699 | 0.9207 | 2400 | 0.2886 |
| 0.2901 | 0.9590 | 2500 | 0.2884 |
| 0.2774 | 0.9974 | 2600 | 0.2883 |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
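A minimal way to try the fine-tuned checkpoint (not part of the generated card; it assumes the checkpoint keeps the Llama-3.1 chat template):

```python
# pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huseyinatahaninan/C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1_llamav2_system-SFT-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize what supervised fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```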
kirubel1738/biogpt-pubmedqa-finetuned
kirubel1738
2025-09-22T18:49:56Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:microsoft/BioGPT-Large-PubMedQA", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "arxiv:1910.09700", "base_model:microsoft/BioGPT-Large-PubMedQA", "region:us" ]
text-generation
2025-09-22T18:49:39Z
--- base_model: microsoft/BioGPT-Large-PubMedQA library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:microsoft/BioGPT-Large-PubMedQA - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
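The card above leaves usage unspecified; a hedged sketch for loading this adapter on its stated base model is below. The prompt format is a guess (PubMedQA-style models often expect a question/answer layout), not documented in the repo.

```python
# pip install peft transformers sacremoses torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/BioGPT-Large-PubMedQA"
adapter_id = "kirubel1738/biogpt-pubmedqa-finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# Prompt format is an assumption, not taken from the model card.
prompt = "question: Does regular exercise reduce blood pressure? answer:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```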
analist/eng-based
analist
2025-09-22T18:45:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T18:39:16Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** analist
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
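A quick way to try the uploaded model (not from the original card; it assumes the repo contains a full checkpoint rather than only adapter weights, and the prompt/settings are placeholders):

```python
# pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "analist/eng-based"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```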
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-40
MattBou00
2025-09-22T18:44:04Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T18:42:25Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-40")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
onnx-community/chatterbox-ONNX
onnx-community
2025-09-22T18:42:08Z
42
3
chatterbox
[ "chatterbox", "onnx", "text-to-speech", "speech", "speech-generation", "voice-cloning", "multilingual-tts", "en", "license:mit", "region:us" ]
text-to-speech
2025-07-08T14:10:18Z
---
license: mit
language:
- en
pipeline_tag: text-to-speech
tags:
- text-to-speech
- speech
- speech-generation
- voice-cloning
- multilingual-tts
library_name: chatterbox
---
<img width="800" alt="cb-big2" src="https://github.com/user-attachments/assets/bd8c5f03-e91d-4ee5-b680-57355da204d1" />
<h1 style="font-size: 32px">Chatterbox TTS</h1>

<div style="display: flex; align-items: center; gap: 12px">
  <a href="https://resemble-ai.github.io/chatterbox_demopage/">
    <img src="https://img.shields.io/badge/listen-demo_samples-blue" alt="Listen to Demo Samples" />
  </a>
  <a href="https://huggingface.co/spaces/ResembleAI/Chatterbox">
    <img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm.svg" alt="Open in HF Spaces" />
  </a>
  <a href="https://podonos.com/resembleai/chatterbox">
    <img src="https://static-public.podonos.com/badges/insight-on-pdns-sm-dark.svg" alt="Insight on Podonos" />
  </a>
</div>

<div style="display: flex; align-items: center; gap: 8px;">
  <img width="100" alt="resemble-logo-horizontal" src="https://github.com/user-attachments/assets/35cf756b-3506-4943-9c72-c05ddfa4e525" />
</div>

**Chatterbox** is [Resemble AI's](https://resemble.ai) production-grade open source TTS model. Chatterbox supports **English** out of the box. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.

Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support **emotion exaggeration control**, a powerful feature that makes your voices stand out.

Chatterbox is provided in an exported ONNX format, enabling fast and portable inference with ONNX Runtime across platforms.

# Key Details

- SoTA zeroshot English TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs (optional)
- Easy voice conversion script using onnxruntime
- [Outperforms ElevenLabs](https://podonos.com/resembleai/chatterbox)

# Tips

- **General Use (TTS and Voice Agents):**
  - The default settings (`exaggeration=0.5`, `cfg=0.5`) work well for most prompts.
- **Expressive or Dramatic Speech:**
  - Try increasing `exaggeration` to around `0.7` or higher.
  - Higher `exaggeration` tends to speed up speech.

# Usage

[Link to GitHub ONNX Export and Inference script](https://github.com/VladOS95-cyber/onnx_conversion_scripts/tree/main/chatterbox)

```python
# !pip install --upgrade onnxruntime==1.22.1 huggingface_hub==0.34.4 transformers==4.46.3 numpy==2.2.6 tqdm==4.67.1 librosa==0.11.0 soundfile==0.13.1
import onnxruntime
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer
import numpy as np
from tqdm import tqdm
import librosa
import soundfile as sf

S3GEN_SR = 24000
START_SPEECH_TOKEN = 6561
STOP_SPEECH_TOKEN = 6562


class RepetitionPenaltyLogitsProcessor:
    def __init__(self, penalty: float):
        if not isinstance(penalty, float) or not (penalty > 0):
            raise ValueError(f"`penalty` must be a strictly positive float, but is {penalty}")
        self.penalty = penalty

    def __call__(self, input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
        score = np.take_along_axis(scores, input_ids, axis=1)
        score = np.where(score < 0, score * self.penalty, score / self.penalty)
        scores_processed = scores.copy()
        np.put_along_axis(scores_processed, input_ids, score, axis=1)
        return scores_processed


def run_inference(
    text="The Lord of the Rings is the greatest work of literature.",
    target_voice_path=None,
    max_new_tokens=256,
    exaggeration=0.5,
    output_dir="converted",
    output_file_name="output.wav",
    apply_watermark=True,
):
    model_id = "onnx-community/chatterbox-onnx"
    if not target_voice_path:
        target_voice_path = hf_hub_download(repo_id=model_id, filename="default_voice.wav", local_dir=output_dir)

    ## Load model
    speech_encoder_path = hf_hub_download(repo_id=model_id, filename="speech_encoder.onnx", local_dir=output_dir, subfolder='onnx')
    hf_hub_download(repo_id=model_id, filename="speech_encoder.onnx_data", local_dir=output_dir, subfolder='onnx')
    embed_tokens_path = hf_hub_download(repo_id=model_id, filename="embed_tokens.onnx", local_dir=output_dir, subfolder='onnx')
    hf_hub_download(repo_id=model_id, filename="embed_tokens.onnx_data", local_dir=output_dir, subfolder='onnx')
    conditional_decoder_path = hf_hub_download(repo_id=model_id, filename="conditional_decoder.onnx", local_dir=output_dir, subfolder='onnx')
    hf_hub_download(repo_id=model_id, filename="conditional_decoder.onnx_data", local_dir=output_dir, subfolder='onnx')
    language_model_path = hf_hub_download(repo_id=model_id, filename="language_model.onnx", local_dir=output_dir, subfolder='onnx')
    hf_hub_download(repo_id=model_id, filename="language_model.onnx_data", local_dir=output_dir, subfolder='onnx')

    ## Start inference sessions
    speech_encoder_session = onnxruntime.InferenceSession(speech_encoder_path)
    embed_tokens_session = onnxruntime.InferenceSession(embed_tokens_path)
    llama_with_past_session = onnxruntime.InferenceSession(language_model_path)
    cond_decoder_session = onnxruntime.InferenceSession(conditional_decoder_path)

    def execute_text_to_audio_inference(text):
        print("Start inference script...")
        audio_values, _ = librosa.load(target_voice_path, sr=S3GEN_SR)
        audio_values = audio_values[np.newaxis, :].astype(np.float32)

        ## Prepare input
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        input_ids = tokenizer(text, return_tensors="np")["input_ids"].astype(np.int64)
        position_ids = np.where(
            input_ids >= START_SPEECH_TOKEN,
            0,
            np.arange(input_ids.shape[1])[np.newaxis, :] - 1,
        )
        ort_embed_tokens_inputs = {
            "input_ids": input_ids,
            "position_ids": position_ids,
            "exaggeration": np.array([exaggeration], dtype=np.float32),
        }

        ## Instantiate the logits processors.
        repetition_penalty = 1.2
        repetition_penalty_processor = RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty)

        num_hidden_layers = 30
        num_key_value_heads = 16
        head_dim = 64
        # np.int64 here (np.long was removed in NumPy 2.x, which the pip pin above installs)
        generate_tokens = np.array([[START_SPEECH_TOKEN]], dtype=np.int64)

        # ---- Generation Loop using kv_cache ----
        for i in tqdm(range(max_new_tokens), desc="Sampling", dynamic_ncols=True):
            inputs_embeds = embed_tokens_session.run(None, ort_embed_tokens_inputs)[0]
            if i == 0:
                ort_speech_encoder_input = {
                    "audio_values": audio_values,
                }
                cond_emb, prompt_token, ref_x_vector, prompt_feat = speech_encoder_session.run(None, ort_speech_encoder_input)
                inputs_embeds = np.concatenate((cond_emb, inputs_embeds), axis=1)

                ## Prepare llm inputs
                batch_size, seq_len, _ = inputs_embeds.shape
                past_key_values = {
                    f"past_key_values.{layer}.{kv}": np.zeros([batch_size, num_key_value_heads, 0, head_dim], dtype=np.float32)
                    for layer in range(num_hidden_layers)
                    for kv in ("key", "value")
                }
                attention_mask = np.ones((batch_size, seq_len), dtype=np.int64)
                llm_position_ids = np.cumsum(attention_mask, axis=1, dtype=np.int64) - 1

            logits, *present_key_values = llama_with_past_session.run(None, dict(
                inputs_embeds=inputs_embeds,
                attention_mask=attention_mask,
                position_ids=llm_position_ids,
                **past_key_values,
            ))
            logits = logits[:, -1, :]
            next_token_logits = repetition_penalty_processor(generate_tokens, logits)
            next_token = np.argmax(next_token_logits, axis=-1, keepdims=True).astype(np.int64)
            generate_tokens = np.concatenate((generate_tokens, next_token), axis=-1)
            if (next_token.flatten() == STOP_SPEECH_TOKEN).all():
                break

            # Get embedding for the new token.
            position_ids = np.full(
                (input_ids.shape[0], 1),
                i + 1,
                dtype=np.int64,
            )
            ort_embed_tokens_inputs["input_ids"] = next_token
            ort_embed_tokens_inputs["position_ids"] = position_ids

            ## Update values for next generation loop
            attention_mask = np.concatenate([attention_mask, np.ones((batch_size, 1), dtype=np.int64)], axis=1)
            llm_position_ids = llm_position_ids[:, -1:] + 1
            for j, key in enumerate(past_key_values):
                past_key_values[key] = present_key_values[j]

        speech_tokens = generate_tokens[:, 1:-1]
        speech_tokens = np.concatenate([prompt_token, speech_tokens], axis=1)
        return speech_tokens, ref_x_vector, prompt_feat

    speech_tokens, speaker_embeddings, speaker_features = execute_text_to_audio_inference(text)
    cond_decoder_input = {
        "speech_tokens": speech_tokens,
        "speaker_embeddings": speaker_embeddings,
        "speaker_features": speaker_features,
    }
    wav = cond_decoder_session.run(None, cond_decoder_input)[0]
    wav = np.squeeze(wav, axis=0)

    # Optional: Apply watermark
    if apply_watermark:
        import perth
        watermarker = perth.PerthImplicitWatermarker()
        wav = watermarker.apply_watermark(wav, sample_rate=S3GEN_SR)

    sf.write(output_file_name, wav, S3GEN_SR)
    print(f"{output_file_name} was successfully saved")


if __name__ == "__main__":
    run_inference(
        text="Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill.",
        exaggeration=0.5,
        output_file_name="output.wav",
        apply_watermark=False,
    )
```

# Acknowledgements

- [Xenova](https://huggingface.co/Xenova)
- [Vladislav Bronzov](https://github.com/VladOS95-cyber)
- [Resemble AI](https://github.com/resemble-ai/chatterbox)

# Built-in PerTh Watermarking for Responsible AI

Every audio file generated by Chatterbox includes [Resemble AI's Perth (Perceptual Threshold) Watermarker](https://github.com/resemble-ai/perth) - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.

# Disclaimer

Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.
vivi-yu/primevul_prm_3epoch
vivi-yu
2025-09-22T18:41:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
token-classification
2025-09-22T18:27:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
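The card leaves usage unspecified; given the `token-classification` pipeline tag and Qwen2 backbone, a minimal loading sketch might look like the following. The label semantics (the name suggests a process reward model for vulnerability data) are not documented, so the output interpretation is an assumption.

```python
# pip install transformers torch
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "vivi-yu/primevul_prm_3epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

code = "int main() { char buf[8]; gets(buf); return 0; }"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Per-token label probabilities at the final position; label meanings are undocumented.
print(logits.softmax(dim=-1)[0, -1])
```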
iwswordpress/marcus-tinyllama-finetuned-large
iwswordpress
2025-09-22T18:40:25Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3.1-8B", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "region:us" ]
text-generation
2025-09-22T18:39:59Z
--- base_model: meta-llama/Meta-Llama-3.1-8B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:meta-llama/Meta-Llama-3.1-8B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
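The card leaves usage blank; a minimal sketch for attaching this LoRA adapter to its stated base model is below. Note that meta-llama/Meta-Llama-3.1-8B is a gated repo requiring accepted access, and the prompt/generation settings are assumptions.

```python
# pip install peft transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"  # gated; request access on the Hub first
adapter_id = "iwswordpress/marcus-tinyllama-finetuned-large"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Marcus Aurelius wrote that", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```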
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-20
MattBou00
2025-09-22T18:40:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T18:38:05Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round3-checkpoint-epoch-20")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
LandCruiser/sn21_omg3_2309_3
LandCruiser
2025-09-22T18:39:07Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-22T17:43:14Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
aamijar/Llama-2-7b-hf-qlora-r8-boolq-epochs1
aamijar
2025-09-22T18:37:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T18:37:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5
MattBou00
2025-09-22T18:34:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T18:33:07Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
bnsh/HRM-checkpoint-sudoku-full
bnsh
2025-09-22T18:33:53Z
2
0
null
[ "arxiv:2506.21734", "license:cc0-1.0", "region:us" ]
null
2025-09-13T17:00:19Z
---
license: cc0-1.0
---
# HRM Checkpoint — Sudoku Full

Checkpoint from training the Hierarchical Reasoning Model (HRM) on the full Sudoku Extreme dataset, following a setup similar to [sapientinc/HRM-checkpoint-sudoku-extreme](https://huggingface.co/sapientinc/HRM-checkpoint-sudoku-extreme/tree/main).

```bash
python3 ./pretrain.py \
    data_path=data/sudoku-extreme-full \
    epochs=100 \
    eval_interval=100 \
    lr_min_ratio=0.1 \
    global_batch_size=1152 \
    lr=3e-4 \
    puzzle_emb_lr=3e-4 \
    weight_decay=0.1 \
    puzzle_emb_weight_decay=0.1 \
    arch.loss.loss_type=softmax_cross_entropy \
    arch.L_cycles=8 \
    arch.halt_max_steps=8 \
    arch.pos_encodings=learned
```

I tried to mimic the file structure in [sapientinc/HRM-checkpoint-sudoku-extreme](https://huggingface.co/sapientinc/HRM-checkpoint-sudoku-extreme/tree/main), but I figured I'd add some extra stats:

This has the output that `evaluate.py` typically produces across several `max_steps` settings, in an easier-to-read JSON format: [evaluate-Sudoku-extreme-full.json](./evaluate-Sudoku-extreme-full.json).

I also ran it in a loop where I whittled down the set to _only_ the sudokus that remained unsolved. You can see my method in [run_subset.py](./run_subset.py). It produces [stats.json](./stats.json). That's what I'm graphing below.

And here's a graph of that data, somewhat like Figure 5c in the [Hierarchical Reasoning Model](https://arxiv.org/pdf/2506.21734) paper:

![_My_ Figure 5c: Inference Time Scaling](./figure_5c_like_full_extended.png)

(I should say that even though the graph shows exact accuracy at 100% at M<sub>max</sub>=1024, it's not _really_ 100%. It's 99.9605%, which corresponds to 422,619 correct of 422,786 total sudokus, or 167 _unsolved_ sudokus.)

Perhaps it would be useful to see the results as a table.

|Steps|Total|Solved|Solved %|Unsolved|Unsolved %|
|----:|----:|-----:|-------:|-------:|---------:|
|0|422,786|0|0.000%|422,786|100.000%|
|1|422,786|262,006|61.971%|160,780|38.029%|
|2|422,786|373,996|88.460%|48,790|11.540%|
|4|422,786|399,675|94.534%|23,111|5.466%|
|8|422,786|411,387|97.304%|11,399|2.696%|
|16|422,786|417,326|98.709%|5,460|1.291%|
|32|422,786|420,155|99.378%|2,631|0.622%|
|64|422,786|421,523|99.701%|1,263|0.299%|
|128|422,786|422,111|99.840%|675|0.160%|
|256|422,786|422,412|99.912%|374|0.088%|
|512|422,786|422,555|99.945%|231|0.055%|
|1024|422,786|422,619|99.961%|167|0.039%|
|2048|422,786|422,654|99.969%|132|0.031%|
|4096|422,786|422,679|99.975%|107|0.025%|
|8192|422,786|422,690|99.977%|96|0.023%|
|16384|422,786|422,702|99.980%|84|0.020%|
|32768|422,786|422,715|99.983%|71|0.017%|
|65536|422,786|422,718|99.984%|68|0.016%|
|131072|422,786|422,724|99.985%|62|0.015%|
|262144|422,786|422,728|99.986%|58|0.014%|

### Usage

You _should_ be able to run it as

```bash
HRM_LOCATION="/tmp/hrm"  # Or wherever
CHECKPOINT_LOCATION="/tmp/HRM-checkpoint-sudoku-full"  # Or wherever, of course.
git clone https://github.com/sapientinc/HRM "${HRM_LOCATION}"
# Running this requires a bunch of configuration. Obviously Sapient has their
# own README.md, etc. But I've made a docker image that you might be able to
# use as a guide as well. I'll link it below.
git clone https://huggingface.co/bnsh/HRM-checkpoint-sudoku-full/ "${CHECKPOINT_LOCATION}"
cd "${HRM_LOCATION}"
python3 ./evaluate.py checkpoint="${CHECKPOINT_LOCATION}/checkpoint" data_path=data/sudoku-extreme-full/
```

And, here's that Docker image I mentioned: [bnsh/hrm-docker](https://github.com/bnsh/hrm-docker) (setup and usage guide).
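As a quick sanity check (not part of the original card), the percentage columns follow directly from the solved counts; here is a tiny sketch over a few rows copied from the table above:

```python
# Reproduce the Solved % / Unsolved % columns for a few rows of the table.
TOTAL = 422_786
solved_at = {1: 262_006, 8: 411_387, 1024: 422_619, 262_144: 422_728}  # values copied from the table

for steps, solved in solved_at.items():
    unsolved = TOTAL - solved
    print(f"steps={steps:>7,}: solved {solved / TOTAL:.3%}, unsolved {unsolved / TOTAL:.3%}")
# steps=      1: solved 61.971%, unsolved 38.029%
# steps=262,144: solved 99.986%, unsolved 0.014%
```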
### Training Details

- **Hardware**: NVIDIA A10g
- **Runtime**: ≈ 9 days, 3 hours, 34 minutes, 25 seconds (13174m 24.845s)
- **Parameters**: ~27.3M

### Final Metrics

| Metric | Value |
|------------------------|---------:|
| Train Accuracy | 0.98701 |
| Train Exact Accuracy | 0.96367 |
| Train LM Loss | 0.27213 |
| Train Q Continue Loss | 0.13321 |
| Train Q Halt Accuracy | 1.0 |
| Train Q Halt Loss | 0.00632 |
| Train Steps | 1.90995 |

### Run History (ASCII plots)

```
num_params            ▁
train/accuracy        ▂▁▂▁▁▃▄▄▄▄▅▅▅▆▅▆▇▅▇▆▇▆▇▆▇▇▇▇▇▇██████████
train/count           ▁███████████████████████████████████████
train/exact_accuracy  ▁▁▂▂▃▄▄▅▅▅▅▆▆▆▆▇▇▇▇▇▇▇▇▇▇█▇█▇███████████
train/lm_loss         ██▇▇▇▇▆▆▆▆▅▅▅▅▅▅▅▅▅▄▄▄▅▄▄▄▃▄▄▄▃▃▃▂▁▂▂▁▁▁
train/lr              ██████████▇▇▇▆▆▆▆▆▆▆▅▄▄▄▄▄▃▃▃▃▂▂▁▁▁▁▁▁▁▁
train/q_continue_loss ▁▄▃█▃▅▅▆▅▅▅▆▆▆▅▆▅▅▆▆▅▅▅▄▅▅▄▅▅▄▄▄▄▃▃▄▃▃▃▂
train/q_halt_accuracy █▂██▁▄█▅███████████▆████████████████████
train/q_halt_loss     ▁▃▇▁▄▆▄▆▂▄▅▄▇▄▇▄▄▃▆▅▇▃▂▆█▇▆█▅▄▄▆▆▅▄▆▇▅▇▆
train/steps           █▇▇█▆▅▅▅▄▇▄▃▃▃▃▃▃▂▃▂▃▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁█▁▁▁
```

### Reference

Reference: [Hierarchical Reasoning Model (HRM), arXiv:2506.21734](https://arxiv.org/pdf/2506.21734)
qualiaadmin/720ceee4-5bff-40b6-afa0-d340b4e47b2f
qualiaadmin
2025-09-22T18:31:51Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftBlackCube5_Franka_100", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-22T18:15:43Z
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlackCube5_Franka_100
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
QuantBanana/Taxi-v3
QuantBanana
2025-09-22T18:31:36Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-22T18:12:26Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="QuantBanana/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
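For completeness, here is a rough evaluation sketch. It assumes the pickle follows the Deep RL Course layout, i.e. the loaded dict exposes the Q-table under a `"qtable"` key — check the file if your keys differ:

```python
import numpy as np

# Greedy rollout with the loaded Q-table (assumes model["qtable"] exists).
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit only, no exploration
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```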
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5-checkpoint-epoch-80
MattBou00
2025-09-22T18:28:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T18:26:41Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-80")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
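A short note on the value head, since the snippet above stops at the forward pass: in recent TRL versions `AutoModelForCausalLMWithValueHead` returns a `(lm_logits, loss, value)` tuple, so the per-token value estimates can be unpacked directly. Treat this as a sketch — the exact return layout can vary across TRL releases:

```python
# Unpack the forward pass (sketch; tuple layout per recent TRL releases).
lm_logits, loss, value = outputs
print(value.shape)  # expected: (batch_size, sequence_length) value estimates
```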
granenko/Reinforce-1
granenko
2025-09-22T18:27:30Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-09-11T16:26:07Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 15.50 +/- 10.90
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
vishwaraj-ml/Gym-posture-analyzer
vishwaraj-ml
2025-09-22T18:27:28Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T18:06:40Z
--- title: Gym Posture Analyzer emoji: 🏋️ colorFrom: indigo colorTo: blue sdk: gradio app_file: app.py license: apache-2.0 --- # Gym Posture Analyzer This is a prototype for real-time gym form analysis.
ryzax/1.5B-v80
ryzax
2025-09-22T18:20:21Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T20:54:50Z
--- library_name: transformers model_name: 1.5B-v80 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 1.5B-v80 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ryzax/1.5B-v80", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/s2/runs/5u51metp) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.9.0.dev20250827+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round5-checkpoint-epoch-40
MattBou00
2025-09-22T18:20:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T18:18:17Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_18-11-21/checkpoints/checkpoint-epoch-40")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
prithivMLmods/Deneb-Qwen3-Radiation-0.6B
prithivMLmods
2025-09-22T18:18:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "multilingual", "polished", "Abliterated", "math", "conversational", "en", "zh", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T16:58:55Z
--- library_name: transformers tags: - text-generation-inference - multilingual - polished - Abliterated - math license: apache-2.0 language: - en - zh base_model: - Qwen/Qwen3-0.6B pipeline_tag: text-generation --- ![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ctu14zzVbxIGQZ-ZzTnOv.png) # **Deneb-Qwen3-Radiation-0.6B** > **Deneb-Qwen3-Radiation-0.6B** is a reasoning-focused model fine-tuned on **Qwen** for **Abliterated Reasoning** and **polished token probabilities**, enhancing balanced **multilingual generation** across mathematics and general-purpose reasoning. > It specializes in **event-driven logic**, **structured analysis**, and precise probabilistic modeling—making it an ideal tool for researchers, educators, and developers working with uncertainty and structured reasoning. > \[!note] > GGUF: [https://huggingface.co/prithivMLmods/Deneb-Qwen3-Radiation-0.6B-GGUF](https://huggingface.co/prithivMLmods/Deneb-Qwen3-Radiation-0.6B-GGUF) --- ## **Key Features** 1. **Abliterated Reasoning** Enhanced reasoning precision through polished token probability distributions in Qwen and similar models, ensuring balanced and context-aware outputs. 2. **Event Simulation & Logical Analysis** Models random events, probability-driven reasoning, and logical decision-making with strong consistency. 3. **Multilingual Mathematical & General-Purpose Problem Solving** Delivers robust performance in **math**, **probability**, and **structured multilingual tasks**, enabling wide applicability in global research and education. 4. **Hybrid Symbolic-Probabilistic Thinking** Combines structured logic, probabilistic inference, and reasoning fluency, providing accuracy across uncertainty-driven tasks. 5. **Structured Output Mastery** Generates well-structured outputs in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, supporting technical workflows and data-driven research. 6. **Optimized Lightweight Footprint** Compact **0.6B parameter size**, deployable on **edge devices**, **offline clusters**, and **mid-range GPUs**, while maintaining reasoning quality. --- ## **Quickstart with Transformers** ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Deneb-Qwen3-Radiation-0.6B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Simulate the probability of rolling two dice and getting a sum greater than 9. Show the reasoning." 
messages = [ {"role": "system", "content": "You are a reasoning tutor skilled in probability, logic, and multilingual problem-solving."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` --- ## **Intended Use** * Balanced multilingual reasoning and probability modeling * Event simulation, uncertainty analysis, and structured problem solving * Educational and research-focused reasoning tasks * Lightweight deployment in constrained environments * Technical content and structured data generation --- ## **Limitations** * Focused on reasoning and mathematics—less suited for creative writing * Smaller size (0.6B) may limit depth on highly complex, multi-step tasks * Prioritizes structured reasoning and probabilistic accuracy over conversational or emotional tone.
SleepyTerr/college-student-regression-model
SleepyTerr
2025-09-22T18:14:33Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-22T17:00:52Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: college-student-regression-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # college-student-regression-model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.1 - Pytorch 2.6.0 - Datasets 3.5.0 - Tokenizers 0.21.1
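### Example inference (sketch)

The card above lists no usage code, so here is a minimal, hedged inference sketch. It assumes the checkpoint was trained as a single-logit regression head (`num_labels=1`, `problem_type="regression"`); if the head was configured differently, adapt the post-processing accordingly.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "SleepyTerr/college-student-regression-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# For a regression head, the raw logit is the predicted value.
inputs = tokenizer("Example student record text", return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits.squeeze(-1)
print(prediction.item())
```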
yafenlightings/yafen-blogs-lightings-ceiling-fans
yafenlightings
2025-09-22T18:11:25Z
0
0
null
[ "region:us" ]
null
2025-09-22T18:10:49Z
https://postyourarticle.com/beat-the-heat-with-the-best-ceiling-fans-in-singapore/
https://yafen.news.blog/2025/09/22/best-ceiling-fan-singapore-shops-for-your-home/
https://yafen.code.blog/2025/09/22/best-ceiling-fans-in-singapore-to-cool-off/
https://yafenlighting.pixnet.net/blog/post/192757279
https://postyourarticle.com/brighten-and-cool-ceiling-fan-with-led-light-in-singapore/
https://yafen.news.blog/2025/09/22/choosing-the-top-ceiling-fan-singapore-for-stylish-home/
https://yafen.code.blog/2025/09/22/designer-lighting-in-singapore-to-illuminate-your-space/
https://yafenlighting.pixnet.net/blog/post/192757978
https://yafen.news.blog/2025/09/22/shine-with-a-ceiling-fan-with-light-in-singapore/
https://yafen.code.blog/2025/09/22/stay-cool-with-the-best-small-ceiling-fans-in-singapore/
SeamlessX/malaysian-faster-whisper-small-v3-ct2
SeamlessX
2025-09-22T18:11:21Z
4
1
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "whisper", "faster-whisper", "malaysian", "ms", "en", "zh", "ta", "base_model:mesolitica/malaysian-whisper-small-v3", "base_model:finetune:mesolitica/malaysian-whisper-small-v3", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-09-21T18:03:26Z
--- license: apache-2.0 language: - ms - en - zh - ta tags: - audio - automatic-speech-recognition - whisper - ctranslate2 - faster-whisper - malaysian library_name: ctranslate2 base_model: mesolitica/malaysian-whisper-small-v3 --- # Malaysian Whisper Small v3 model for CTranslate2 This repository contains the conversion of [mesolitica/malaysian-whisper-small-v3](https://huggingface.co/mesolitica/malaysian-whisper-small-v3) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("SeamlessX/malaysian-faster-whisper-small-v3-ct2") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion Details The original Transformers model was converted to the CTranslate2 format with the following command: ```bash ct2-transformers-converter \ --model mesolitica/malaysian-whisper-small-v3 \ --output_dir malaysian-faster-whisper-small-v3-ct2 \ --copy_files tokenizer.json preprocessor_config.json \ --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/mesolitica/malaysian-whisper-small-v3).**
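As a concrete illustration of the `compute_type` note in the Conversion Details above, here is a small sketch that loads the FP16 weights with a different computation type (INT8 on CPU in this example):

```python
from faster_whisper import WhisperModel

# The stored weights are FP16; compute_type converts them at load time.
model = WhisperModel(
    "SeamlessX/malaysian-faster-whisper-small-v3-ct2",
    device="cpu",
    compute_type="int8",
)
```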
GaborMadarasz/AstroQA_mamba_epoch2_V5
GaborMadarasz
2025-09-22T18:06:11Z
0
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T18:05:58Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jackbrosgol/gemma-circuits
jackbrosgol
2025-09-22T18:05:25Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-12b-pt", "base_model:finetune:google/gemma-3-12b-pt", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:30:24Z
--- base_model: google/gemma-3-12b-pt library_name: transformers model_name: gemma-circuits tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gemma-circuits This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jackbrosgol/gemma-circuits", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 3.3.2 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cesarali/AICMEPK_cluster
cesarali
2025-09-22T18:05:11Z
24
0
generative-pk
[ "generative-pk", "pytorch", "node_pk", "generative", "predictive", "en", "dataset:simulated", "license:apache-2.0", "region:us" ]
null
2025-09-01T12:12:35Z
---
language:
- en
license: apache-2.0
library_name: generative-pk
datasets:
- simulated
metrics:
- rmse
- npde
tags:
- generative
- predictive
---

# Hierarchical Neural Process for Pharmacokinetic Data

## Overview

An amortized-context neural process generative model for pharmacokinetic modelling.

**Model details:**

- **Authors:** César Ojeda (@cesarali)
- **License:** Apache 2.0

## Intended use

Sample drug-concentration behavior, and sample and predict new time points or new individuals.
yanxg/FLUX.1-Kontext-dev-custom-L
yanxg
2025-09-22T18:04:44Z
2
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-09-20T23:51:12Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF
mradermacher
2025-09-22T18:04:27Z
2,111
0
transformers
[ "transformers", "gguf", "trl", "text-generation-inference", "math", "science", "code", "v3.1", "stem", "en", "base_model:prithivMLmods/Capella-Qwen3-DS-V3.1-4B", "base_model:quantized:prithivMLmods/Capella-Qwen3-DS-V3.1-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-08T02:59:58Z
--- base_model: prithivMLmods/Capella-Qwen3-DS-V3.1-4B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - trl - text-generation-inference - math - science - code - v3.1 - stem --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Capella-Qwen3-DS-V3.1-4B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
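As a supplement to the Usage pointer above, here is a hedged sketch of fetching one quant and running it locally via the `llama-cpp-python` binding (the chosen quant and generation parameters are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the recommended Q4_K_M quant, then load it with llama.cpp.
path = hf_hub_download(
    "mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF",
    "Capella-Qwen3-DS-V3.1-4B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```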
aminLo/best-grade-model
aminLo
2025-09-22T18:00:47Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:google/flan-t5-base", "lora", "transformers", "base_model:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2025-09-22T17:52:39Z
--- library_name: peft license: apache-2.0 base_model: google/flan-t5-base tags: - base_model:adapter:google/flan-t5-base - lora - transformers model-index: - name: best-grade-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # best-grade-model This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4312 | 0.8649 | 200 | 0.2963 | | 0.2898 | 1.7265 | 400 | 0.2353 | | 0.2282 | 2.5881 | 600 | 0.2004 | | 0.1997 | 3.4497 | 800 | 0.1907 | | 0.175 | 4.3114 | 1000 | 0.1886 | | 0.149 | 5.1730 | 1200 | 0.1806 | | 0.1538 | 6.0346 | 1400 | 0.1743 | | 0.148 | 6.8995 | 1600 | 0.1741 | | 0.1319 | 7.7611 | 1800 | 0.1721 | ### Framework versions - PEFT 0.17.1 - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
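### Example usage (sketch)

Since the card above lists no usage snippet, here is a minimal loading sketch. It assumes this repo contains only the LoRA adapter and that the base model is `google/flan-t5-base` as stated in the metadata; the prompt format is illustrative only.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "aminLo/best-grade-model")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Grade the following answer: ...", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```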
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758563978
poolkiltzn
2025-09-22T18:00:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-22T18:00:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF
mradermacher
2025-09-22T18:00:10Z
0
0
transformers
[ "transformers", "gguf", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "float32", "horror", "32 bit precision", "science fiction", "fantasy", "Star Trek", "finetune", "thinking", "reasoning", "unsloth", "moe", "mixture of experts", "merge", "en", "dataset:progs2002/star-trek-tng-scripts", "dataset:DavidAU/horror-nightmare1", "base_model:DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B", "base_model:quantized:DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-22T13:31:08Z
---
base_model: DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B
datasets:
- progs2002/star-trek-tng-scripts
- DavidAU/horror-nightmare1
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- float32
- horror
- 32 bit precision
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
- moe
- mixture of experts
- merge
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality.
IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| 
[GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q4_0.gguf) | i1-Q4_0 | 6.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q4_1.gguf) | i1-Q4_1 | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-i1-GGUF/resolve/main/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B.i1-Q6_K.gguf) | i1-Q6_K | 8.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Oussama09D/PosteLLM
Oussama09D
2025-09-22T17:59:48Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T17:57:33Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmirMohseni/grpo-qwen2.5-7b-stem-lora
AmirMohseni
2025-09-22T17:58:07Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-21T10:17:49Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: grpo-qwen2.5-7b-stem-lora tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for grpo-qwen2.5-7b-stem-lora This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmirMohseni/grpo-qwen2.5-7b-stem-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rl-research-team/grpo-math-training/runs/tamo2tpo) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.2 - Pytorch: 2.8.0 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TAUR-dev/M-0921__0epoch_CT3and4arg_grpo-rl
TAUR-dev
2025-09-22T17:56:50Z
0
0
null
[ "safetensors", "qwen2", "en", "license:mit", "region:us" ]
null
2025-09-22T06:05:55Z
--- language: en license: mit --- # M-0921__0epoch_CT3and4arg_grpo-rl ## Model Details - **Training Method**: VeRL Reinforcement Learning (RL) - **Stage Name**: rl - **Experiment**: 0921__0epoch_CT3and4arg_grpo - **RL Framework**: VeRL (Versatile Reinforcement Learning) ## Training Configuration ## Experiment Tracking 🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__0921__0epoch_CT3and4arg_grpo__v1 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-0921__0epoch_CT3and4arg_grpo-rl") model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-0921__0epoch_CT3and4arg_grpo-rl") ```
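The usage snippet above loads the model and tokenizer but stops short of generation; here is a small hedged continuation, with a made-up prompt and illustrative decoding settings (not from the experiment tracker):

```python
messages = [{"role": "user", "content": "Compute 37 * 24 step by step."}]  # example prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```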
mradermacher/Qwen3-1.7B-luke-v1-GGUF
mradermacher
2025-09-22T17:48:19Z
1,098
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "luke-sft", "trl", "sft", "en", "dataset:lukedai/hehe", "base_model:lukedai/Qwen3-1.7B-luke-v1", "base_model:quantized:lukedai/Qwen3-1.7B-luke-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-15T07:05:40Z
--- base_model: lukedai/Qwen3-1.7B-luke-v1 datasets: lukedai/hehe language: - en library_name: transformers model_name: Qwen3-1.7B-luke-v1 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - luke-sft - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/lukedai/Qwen3-1.7B-luke-v1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-1.7B-luke-v1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on 
the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
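As a concrete companion to the Usage pointer above, one way to fetch a quant from this repo and run it locally is llama-cpp-python — a sketch, assuming that package is installed; the filename matches the Q4_K_S row in the table, and the context size and sampling settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_S quant from the table above.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-1.7B-luke-v1-GGUF",
    filename="Qwen3-1.7B-luke-v1.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```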
winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_all_37_0.001_12800_5
winnieyangwannan
2025-09-22T17:47:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-22T17:45:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
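Since the quick-start section of this card is left unfilled, here is a hedged loading sketch that assumes the checkpoint keeps the stock Qwen2.5-VL layout its tags suggest; the image URL and prompt are placeholders, and `qwen_vl_utils` is the upstream Qwen helper package:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_6_all_37_0.001_12800_5"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/photo.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos, padding=True, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```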
sidhantoon/Moji_v20
sidhantoon
2025-09-22T17:46:50Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-22T17:43:42Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
sabirjdjdjd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_lazy_prawn
sabirjdjdjd
2025-09-22T17:46:28Z
173
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am territorial_lazy_prawn", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T03:58:59Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am territorial_lazy_prawn --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/gemma270m-fiorentino-lora-GGUF
mradermacher
2025-09-22T17:43:23Z
153
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "sft", "trl", "en", "base_model:MrDave/gemma270m-fiorentino-lora", "base_model:quantized:MrDave/gemma270m-fiorentino-lora", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-19T12:33:29Z
--- base_model: MrDave/gemma270m-fiorentino-lora language: - en library_name: transformers model_name: gemma270m-fiorentino-lora mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - sft - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/MrDave/gemma270m-fiorentino-lora <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gemma270m-fiorentino-lora-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/gemma270m-fiorentino-lora-GGUF/resolve/main/gemma270m-fiorentino-lora.f16.gguf) | f16 | 0.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing 
some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF
mradermacher
2025-09-22T17:43:00Z
748
0
transformers
[ "transformers", "gguf", "en", "base_model:kaonai/kaon-l-mistral-24b-v0.1", "base_model:quantized:kaonai/kaon-l-mistral-24b-v0.1", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-19T13:56:15Z
--- base_model: kaonai/kaon-l-mistral-24b-v0.1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/kaonai/kaon-l-mistral-24b-v0.1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#kaon-l-mistral-24b-v0.1-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better | |
[GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/kaon-l-mistral-24b-v0.1-i1-GGUF/resolve/main/kaon-l-mistral-24b-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
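Note the first table row: it is not a runnable model but the imatrix calibration file. Below is a rough sketch of how such a file might be fed to llama.cpp's quantize tool to build your own quant; treat the binary name, flag, and file paths as assumptions that depend on your llama.cpp version:

```python
import subprocess

# Assumptions: a local llama.cpp build providing a "llama-quantize" binary, and a
# full-precision GGUF of the base model already converted and on disk.
subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "kaon-l-mistral-24b-v0.1.imatrix.gguf",  # file from the table above
        "kaon-l-mistral-24b-v0.1.f16.gguf",    # hypothetical f16 source file
        "kaon-l-mistral-24b-v0.1.IQ3_M.gguf",  # output quant
        "IQ3_M",
    ],
    check=True,
)
```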
mradermacher/Slot-MLLM-7B-instruct-GGUF
mradermacher
2025-09-22T17:42:15Z
188
0
transformers
[ "transformers", "gguf", "en", "base_model:KU-AGI/Slot-MLLM-7B-instruct", "base_model:quantized:KU-AGI/Slot-MLLM-7B-instruct", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-09-20T09:23:37Z
--- base_model: KU-AGI/Slot-MLLM-7B-instruct language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/KU-AGI/Slot-MLLM-7B-instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Slot-MLLM-7B-instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q2_K.gguf) | Q2_K | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.IQ4_XS.gguf) | IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q6_K.gguf) | Q6_K | 5.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF/resolve/main/Slot-MLLM-7B-instruct.f16.gguf) | f16 | 13.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Slot-MLLM-14B-instruct-GGUF
mradermacher
2025-09-22T17:41:02Z
105
0
transformers
[ "transformers", "gguf", "en", "base_model:KU-AGI/Slot-MLLM-14B-instruct", "base_model:quantized:KU-AGI/Slot-MLLM-14B-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-21T14:09:29Z
--- base_model: KU-AGI/Slot-MLLM-14B-instruct language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/KU-AGI/Slot-MLLM-14B-instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Slot-MLLM-14B-instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q3_K_L.gguf) | Q3_K_L | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q5_K_M.gguf) | Q5_K_M | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q6_K.gguf) | Q6_K | 12.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-14B-instruct-GGUF/resolve/main/Slot-MLLM-14B-instruct.Q8_0.gguf) | Q8_0 | 15.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Slot-MLLM-7B-instruct-i1-GGUF
mradermacher
2025-09-22T17:40:57Z
84
0
transformers
[ "transformers", "gguf", "en", "base_model:KU-AGI/Slot-MLLM-7B-instruct", "base_model:quantized:KU-AGI/Slot-MLLM-7B-instruct", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-21T14:23:14Z
--- base_model: KU-AGI/Slot-MLLM-7B-instruct language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/KU-AGI/Slot-MLLM-7B-instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Slot-MLLM-7B-instruct-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* | |
[GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q4_1.gguf) | i1-Q4_1 | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Slot-MLLM-7B-instruct-i1-GGUF/resolve/main/Slot-MLLM-7B-instruct.i1-Q6_K.gguf) | i1-Q6_K | 5.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Ministral-8B-it-2410-iSMART-GGUF
mradermacher
2025-09-22T17:40:49Z
55
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "vi", "base_model:lefantom00/Ministral-8B-it-2410-iSMART", "base_model:quantized:lefantom00/Ministral-8B-it-2410-iSMART", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-21T15:23:58Z
--- base_model: lefantom00/Ministral-8B-it-2410-iSMART language: - en - vi library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/lefantom00/Ministral-8B-it-2410-iSMART <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Ministral-8B-it-2410-iSMART-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Ministral-8B-it-2410-iSMART-GGUF/resolve/main/Ministral-8B-it-2410-iSMART.f16.gguf) | f16 | 16.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/command-a-03-2025-uncut-GGUF
mradermacher
2025-09-22T17:40:27Z
14
0
transformers
[ "transformers", "gguf", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "dataset:jukofyork/instruction-refusals-500MB", "dataset:jukofyork/instruction-responses-500MB", "base_model:jukofyork/command-a-03-2025-uncut", "base_model:quantized:jukofyork/command-a-03-2025-uncut", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-21T21:49:31Z
--- base_model: jukofyork/command-a-03-2025-uncut datasets: - jukofyork/instruction-refusals-500MB - jukofyork/instruction-responses-500MB language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi library_name: transformers license: cc-by-nc-4.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/jukofyork/command-a-03-2025-uncut <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#command-a-03-2025-uncut-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/command-a-03-2025-uncut-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q2_K.gguf) | Q2_K | 42.2 | | | [GGUF](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q3_K_S.gguf) | Q3_K_S | 49.1 | | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q3_K_M.gguf.part2of2) | Q3_K_M | 54.5 | lower quality | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q3_K_L.gguf.part2of2) | Q3_K_L | 59.2 | | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.IQ4_XS.gguf.part2of2) | IQ4_XS | 60.7 | | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q4_K_S.gguf.part2of2) | Q4_K_S | 63.9 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q4_K_M.gguf.part2of2) | Q4_K_M | 67.2 | fast, recommended | | [PART 
1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q5_K_S.gguf.part2of2) | Q5_K_S | 76.9 | | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q5_K_M.gguf.part2of2) | Q5_K_M | 78.9 | | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q6_K.gguf.part2of2) | Q6_K | 91.2 | very good quality | | [PART 1](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/command-a-03-2025-uncut-GGUF/resolve/main/command-a-03-2025-uncut.Q8_0.gguf.part3of3) | Q8_0 | 118.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
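Several rows above are split into `.part1ofN` pieces, and the Usage section points to concatenation as the way to reassemble multi-part files. A minimal sketch of doing that in Python — filenames assume the parts were downloaded into the working directory:

```python
import shutil
from pathlib import Path

# Reassemble, e.g., the Q6_K quant from its downloaded parts.
# Lexicographic sorting is sufficient here because N is a single digit.
parts = sorted(Path(".").glob("command-a-03-2025-uncut.Q6_K.gguf.part*"))
assert parts, "download the .partXofY files into this directory first"
with open("command-a-03-2025-uncut.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy; each part is tens of GB
```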
mradermacher/L3.3-70B-Amalgamma-V1-GGUF
mradermacher
2025-09-22T17:40:01Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Darkhn-Graveyard/L3.3-70B-Amalgamma-V1", "base_model:quantized:Darkhn-Graveyard/L3.3-70B-Amalgamma-V1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-22T06:40:52Z
--- base_model: Darkhn-Graveyard/L3.3-70B-Amalgamma-V1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Darkhn-Graveyard/L3.3-70B-Amalgamma-V1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#L3.3-70B-Amalgamma-V1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-70B-Amalgamma-V1-GGUF/resolve/main/L3.3-70B-Amalgamma-V1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow 
comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
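A minimal loading sketch for these quants, assuming `huggingface_hub` and `llama-cpp-python` (neither tool is named by this card; both are assumptions). It uses the single-file Q4_K_S quant so no multi-part concatenation is needed:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants from the table above (~40 GB).
path = hf_hub_download(
    repo_id="mradermacher/L3.3-70B-Amalgamma-V1-GGUF",
    filename="L3.3-70B-Amalgamma-V1.Q4_K_S.gguf",
)

# Note: the split quants (Q6_K, Q8_0) ship as .part1of2/.part2of2 files and
# must be concatenated into a single .gguf before loading.
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```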
Gilotopia/FLTest1
Gilotopia
2025-09-22T17:37:15Z
0
0
null
[ "license:other", "region:us" ]
null
2025-09-06T13:33:03Z
--- license: other license_name: all-rights-reserved-no-usage license_link: LICENSE ---
PranjalGoswami69/ruby
PranjalGoswami69
2025-09-22T17:33:01Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-22T17:09:34Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ruby --- # Ruby <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ruby` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ruby", "lora_weights": "https://huggingface.co/PranjalGoswami69/ruby/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('PranjalGoswami69/ruby', weight_name='lora.safetensors') image = pipeline('ruby').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/PranjalGoswami69/ruby/discussions) to add images that show off what you’ve made with this LoRA.
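The diffusers snippet above generates an image but never writes it anywhere; a one-line continuation of that snippet (the output file name is an arbitrary choice, not part of the card):

```py
image.save("ruby.png")  # continues the diffusers example above; file name is arbitrary
```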
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922162156-epoch-1
vectorzhou
2025-09-22T17:31:57Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T17:31:28Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922162156-epoch-1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/y3rtsfjt) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu128 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4
MattBou00
2025-09-22T17:30:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T17:29:10Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
tralalerrotralala228/zoeymoon
tralalerrotralala228
2025-09-22T17:25:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-22T15:54:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: zoeymoon --- # Zoeymoon <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `zoeymoon` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "zoeymoon", "lora_weights": "https://huggingface.co/tralalerrotralala228/zoeymoon/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tralalerrotralala228/zoeymoon', weight_name='lora.safetensors') image = pipeline('zoeymoon').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tralalerrotralala228/zoeymoon/discussions) to add images that show off what you’ve made with this LoRA.
tralalerrotralala228/sashablaze
tralalerrotralala228
2025-09-22T17:23:06Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-22T15:50:54Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: sashablaze --- # Sashablaze <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `sashablaze` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "sashablaze", "lora_weights": "https://huggingface.co/tralalerrotralala228/sashablaze/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tralalerrotralala228/sashablaze', weight_name='lora.safetensors') image = pipeline('sashablaze').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tralalerrotralala228/sashablaze/discussions) to add images that show off what you’ve made with this LoRA.
tralalerrotralala228/jadestarr
tralalerrotralala228
2025-09-22T17:23:02Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-22T15:52:57Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: jadestarr --- # Jadestarr <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `jadestarr` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "jadestarr", "lora_weights": "https://huggingface.co/tralalerrotralala228/jadestarr/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tralalerrotralala228/jadestarr', weight_name='lora.safetensors') image = pipeline('jadestarr').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tralalerrotralala228/jadestarr/discussions) to add images that show off what you’ve made with this LoRA.
MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4-checkpoint-epoch-60
MattBou00
2025-09-22T17:20:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T17:18:42Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4-checkpoint-epoch-60") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4-checkpoint-epoch-60") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE8_round4-checkpoint-epoch-60") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
sujalappa/speaker-segmentation-fine-tuned
sujalappa
2025-09-22T17:19:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "pyannet", "speaker-diarization", "speaker-segmentation", "generated_from_trainer", "dataset:sujalappa/temp-speaker-diarization-synthetic-dataset", "base_model:pyannote/speaker-diarization-3.1", "base_model:finetune:pyannote/speaker-diarization-3.1", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:23:01Z
--- library_name: transformers license: mit base_model: pyannote/speaker-diarization-3.1 tags: - speaker-diarization - speaker-segmentation - generated_from_trainer datasets: - sujalappa/temp-speaker-diarization-synthetic-dataset model-index: - name: speaker-diarization-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speaker-diarization-fine-tuned This model is a fine-tuned version of [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1) on the sujalappa/temp-speaker-diarization-synthetic-dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Model Preparation Time: 0.0071 - Der: 0.0334 - False Alarm: 0.0059 - Missed Detection: 0.0120 - Confusion: 0.0155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Der | False Alarm | Missed Detection | Confusion | |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:| | 0.086 | 1.0 | 42 | 0.0856 | 0.0071 | 0.0517 | 0.0105 | 0.0207 | 0.0206 | | 0.0417 | 2.0 | 84 | 0.0677 | 0.0071 | 0.0415 | 0.0079 | 0.0153 | 0.0183 | | 0.0278 | 3.0 | 126 | 0.0653 | 0.0071 | 0.0368 | 0.0065 | 0.0132 | 0.0171 | | 0.0222 | 4.0 | 168 | 0.0638 | 0.0071 | 0.0340 | 0.0058 | 0.0120 | 0.0162 | | 0.0242 | 5.0 | 210 | 0.0626 | 0.0071 | 0.0334 | 0.0059 | 0.0120 | 0.0155 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
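The card does not include an inference snippet; below is a minimal sketch with pyannote.audio, assuming this checkpoint loads via `Model.from_pretrained` and serves as the segmentation component of a diarization pipeline (the embedding model and the parameter values are common defaults, not taken from this card):

```python
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

# Assumption: the fine-tuned checkpoint is loadable as a pyannote segmentation model.
segmentation = Model.from_pretrained("sujalappa/speaker-segmentation-fine-tuned")

pipeline = SpeakerDiarization(
    segmentation=segmentation,
    embedding="speechbrain/spkrec-ecapa-voxceleb",  # common default, not from this card
)
pipeline.instantiate({
    "segmentation": {"min_duration_off": 0.0},
    "clustering": {"method": "centroid", "min_cluster_size": 12, "threshold": 0.7},
})

diarization = pipeline("audio.wav")  # placeholder path to a local audio file
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s-{turn.end:.1f}s: {speaker}")
```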
EpistemeAI/gps-oss-20b-finetuned_model
EpistemeAI
2025-09-22T17:19:27Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "gpt_oss", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T17:19:19Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** EpistemeAI - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
aamijar/Llama-2-7b-hf-dora-r8-mrpc-epochs4
aamijar
2025-09-22T17:17:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T17:17:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
haihp02/d473fe20-5de4-4222-8115-c1f4df15a0c3
haihp02
2025-09-22T17:07:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T15:27:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ByteMeHarder-404/basic_sentimentanalysis_finetuning_sst2
ByteMeHarder-404
2025-09-22T17:02:25Z
7
0
null
[ "tensorboard", "safetensors", "bert", "text-classification", "sentiment-analysis", "en", "dataset:glue", "region:us" ]
text-classification
2025-09-12T20:16:58Z
--- language: en datasets: - glue metrics: - accuracy model-name: bert-base-uncased-finetuned-sst2 tags: - text-classification - sentiment-analysis --- # BERT Base (uncased) fine-tuned on SST-2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the **GLUE SST-2** dataset for sentiment classification (positive vs. negative). ## Model Details - **Model type**: BERT (base, uncased) - **Fine-tuned on**: SST-2 (Stanford Sentiment Treebank) - **Labels**: - 0 → Negative - 1 → Positive - **Training framework**: [🤗 Transformers](https://github.com/huggingface/transformers) ## Training - Epochs: 2 - Batch size: 4 (with gradient accumulation steps = 4) - Learning rate: 3e-5 - Mixed precision: fp16 - Optimizer & Scheduler: Default Hugging Face Trainer ## Evaluation Results On the SST-2 validation set: | Epoch | Training Loss | Validation Loss | Accuracy | |-------|---------------|-----------------|----------| | 1 | 0.1761 | 0.2282 | 93.0% | | 2 | 0.1127 | 0.2701 | 93.1% | Final averaged training loss: **0.1663** ## How to Use ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "ByteMeHarder-404/basic_sentimentanalysis_finetuning_sst2" tok = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) inputs = tok("I love Hugging Face!", return_tensors="pt") outputs = model(**inputs) pred = outputs.logits.argmax(dim=-1).item() print("Label:", pred) # 1 = Positive ```
mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF
mradermacher
2025-09-22T17:00:12Z
0
0
transformers
[ "transformers", "gguf", "writing", "creative-writing", "roleplay", "en", "base_model:allura-forge/Koto-Small-7B-IT-ThonkTokens", "base_model:quantized:allura-forge/Koto-Small-7B-IT-ThonkTokens", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-22T13:30:43Z
--- base_model: allura-forge/Koto-Small-7B-IT-ThonkTokens language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - writing - creative-writing - roleplay --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/allura-forge/Koto-Small-7B-IT-ThonkTokens <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Koto-Small-7B-IT-ThonkTokens-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | | | 
[GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-ThonkTokens-i1-GGUF/resolve/main/Koto-Small-7B-IT-ThonkTokens.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
aamijar/Llama-2-7b-hf-dora-r8-mrpc-epochs3
aamijar
2025-09-22T16:57:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:57:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Archief80/OSS.Phi
Archief80
2025-09-22T16:57:14Z
0
0
null
[ "gguf", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-22T16:10:00Z
--- license: other license_name: aa license_link: LICENSE ---
ziadtarek12/whisper-arabic-gulf-seed_168-peft
ziadtarek12
2025-09-22T16:56:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:56:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cha9itha/Mistral_7B_instruct_MCQ_Islamic
cha9itha
2025-09-22T16:47:41Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:39:09Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rajat24/whisper-tiny-finetuned
rajat24
2025-09-22T16:47:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-22T16:22:22Z
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
model-index:
- name: whisper-tiny-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-finetuned

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
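For a quick transcription check, a minimal inference sketch is shown below. It assumes the checkpoint published at `rajat24/whisper-tiny-finetuned` and a local 16 kHz audio file named `sample.wav`; the file name is illustrative, not part of the card.

```python
# Minimal ASR sketch for this checkpoint. "sample.wav" is a hypothetical
# local audio file; everything else is the standard transformers API.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rajat24/whisper-tiny-finetuned",
)

# The pipeline handles resampling, log-mel feature extraction, and decoding
# with the processor bundled in the repo.
result = asr("sample.wav")
print(result["text"])
```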
RR32444/VLM-prompt01
RR32444
2025-09-22T16:47:04Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T16:46:58Z
---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** RR32444
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit

This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
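A minimal loading-and-generation sketch follows. It assumes this repo holds merged Qwen2-VL weights loadable directly with transformers; if the upload is a LoRA adapter only, you would instead load the base model and attach this repo with peft. The image path is illustrative.

```python
# Hypothetical inference sketch: assumes RR32444/VLM-prompt01 contains
# merged Qwen2-VL weights rather than LoRA adapters alone.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "RR32444/VLM-prompt01", device_map="auto"
)
processor = AutoProcessor.from_pretrained("RR32444/VLM-prompt01")

# Build a chat prompt with one image placeholder and one text turn.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

image = Image.open("example.jpg")  # illustrative local image
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```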
samder03/2025-24679-image-autogluon-predictor
samder03
2025-09-22T16:46:04Z
0
0
null
[ "dataset:ecopus/sign_identification", "license:mit", "region:us" ]
null
2025-09-22T00:57:26Z
---
license: mit
datasets:
- ecopus/sign_identification
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model is an image classifier that identifies images of stop signs. It was trained with AutoGluon MultiModal on the ecopus/sign_identification dataset.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is an image classifier that identifies images of stop signs. It was trained with AutoGluon MultiModal on the ecopus/sign_identification dataset.

- **Developed by:** Sam Der
- **Model type:** AutoML (AutoGluon MultiModalPredictor with ResNet18 backbone)
- **License:** MIT

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is intended to be used to distinguish stop signs from other street signs.

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- dataset: ecopus/sign_identification
- splits:
  - original: 30 original images
  - augmented: 385 synthetic images

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- library: AutoGluon MultiModal
- presets: "medium_quality"
- backbone: timm_image → resnet18

#### Training Hyperparameters

- presets="medium_quality"
- hyperparameters={
    "model.names": ["timm_image"],
    "model.timm_image.checkpoint_name": "resnet18",
  }

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

ecopus/sign_identification

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- accuracy: fraction of correctly predicted labels
- F1 (weighted): harmonic mean of precision and recall, weighted by class support

### Results

accuracy: 1.0000 | weighted F1: 1.0000
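The training call implied by the hyperparameters above can be sketched as follows. The DataFrame column names ("image", "label"), file paths, and label values are assumptions, since the card does not specify how the dataset was loaded; the presets and hyperparameters are taken directly from the card.

```python
# Illustrative AutoGluon MultiModal training sketch using the settings
# listed in this card. Paths and labels below are hypothetical.
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

train_df = pd.DataFrame({
    "image": ["signs/stop_001.jpg", "signs/yield_001.jpg"],  # hypothetical image paths
    "label": ["stop", "other"],
})

predictor = MultiModalPredictor(label="label")
predictor.fit(
    train_data=train_df,
    presets="medium_quality",
    hyperparameters={
        "model.names": ["timm_image"],
        "model.timm_image.checkpoint_name": "resnet18",
    },
)

# Evaluate with the two metrics reported in this card.
scores = predictor.evaluate(train_df, metrics=["accuracy", "f1_weighted"])
print(scores)
```

In practice the evaluation would be run on a held-out split rather than the training frame; the perfect scores reported above likely reflect the small (30-image) original dataset.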