Quantization made by Richard Erkhov.
InternLM2-Chat-20B-ToxicRP - GGUF
- Model creator: https://huggingface.co/Aculi/
- Original model: https://huggingface.co/Aculi/InternLM2-Chat-20B-ToxicRP/
Original model description:
```yaml
base_model:
- Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
library_name: transformers
tags:
- mergekit
- merge
```
InternLM2-chat-20B-ToxicRP-QLORA-Merged
This model was finetuned by me using compute provided by g4rg. Big thanks to everyone who helped. Do whatever you want with this model, just don't do anything illegal.
GGUF here: Aculi/InternLM2-Chat-20B-ToxicRP-GGUF
Have fun.
This model uses the ChatML prompt format.
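Since the model expects ChatML, prompts should wrap each turn in `<|im_start|>`/`<|im_end|>` tags. A minimal sketch of that formatting (the helper name and example messages are illustrative, not part of the model release):

```python
# Sketch of ChatML prompt assembly; tags follow the standard ChatML format.
def format_chatml(system: str, user: str) -> str:
    """Wrap a system message and one user turn in ChatML tags,
    leaving the prompt open for the assistant's reply."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "Hello!")
print(prompt)
```

The trailing `<|im_start|>assistant\n` cues the model to generate the assistant turn; generation is typically stopped on `<|im_end|>`.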
Merge Method
This model was merged using the passthrough merge method.
Models Merged
The following models were included in the merge:
- output/intervitens_internlm2-limarp-chat-20b-2 + Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 48]
    model: output/intervitens_internlm2-limarp-chat-20b-2+Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
```
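To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` command. A sketch, assuming mergekit is installed and the base and LoRA weights are available locally or on the Hub (the output path is illustrative):

```shell
# Write the merge configuration used for this model to a file.
cat > config.yaml <<'EOF'
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 48]
    model: output/intervitens_internlm2-limarp-chat-20b-2+Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
EOF

# Apply the config and write the merged model (requires: pip install mergekit,
# plus enough disk/RAM for a 20B model; uncomment to run):
# mergekit-yaml config.yaml ./merged-model
```

The `model: base+adapter` syntax tells mergekit to apply the QLoRA adapter on top of the base model before the passthrough merge.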
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.