You need to agree to share your contact information to access this model

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

Log in or Sign Up to review the conditions and access this model content.

Uploaded finetuned model

  • Developed by: muralcode
  • License: apache-2.0
  • Finetuned from model : unsloth/qwen2.5-coder-7b-bnb-4bit

This qwen2 model was trained 2x faster with Unsloth and Huggingface's TRL library.

Zion-Aritha-AI-AR2-7B: State-of-the-Art Offensive Security Coding Assistant

Model Summary

Zion-Aritha-AI-AR2-7B is a specialized, fine-tuned Large Language Model (LLM) designed for offensive cybersecurity operations, red teaming, and unrestricted code generation.

This model represents a practical implementation and enhancement of the "Scenario 2" methodology (Fine-Tuned Expert Model) proposed in the research paper "Automated Malware Source Code Generation via Uncensored LLMs and Adversarial Evasion of Censored Model" (Acosta-Bermejo et al., 2025).

By combining the state-of-the-art coding proficiency of Qwen 2.5 Coder with a rigorously "abliterated" dataset (scrubbed of safety refusals) and enhanced logical reasoning data, Zion-Aritha achieves 0% refusal rates on complex security tasks while maintaining high-level logic and reasoning capabilities.

Base Architecture: Qwen/Qwen2.5-Coder-7B (7.61 Billion Parameters)

Precision: Full FP16 (16-bit) weights.

Context Window: 4096 Tokens.

Training Framework: Unsloth (QLoRA).

Status: Uncensored / Abliterated.

🔬 Research & Methodology

This model was trained to validate and extend the findings of Acosta-Bermejo, R., Terrazas-Chavez, J.A., & Aguirre-Anaya, E. (2025).

The "Scenario 2" Implementation

The research identified that fine-tuning a high-performance coding model (specifically Qwen) on a curated dataset of malicious and benign instructions yields the most effective tool for automated malware generation. Zion-Aritha implements this by:

Architecture Selection: Using Qwen 2.5 Coder 7B, identified in the study as the superior model (Weighted Score: 0.676) for correctness and inference speed, outperforming DeepSeek and CodeLlama.

Dataset Composition (Enhanced):

Logic & Reasoning: 3,000 samples from WizardLM_evol_instruct_70k to prevent "lobotomy" (loss of intelligence) and ensure the model can handle complex, multi-step architectural instructions.

Compliance: 935 samples from Guilherme34/uncensor to overwrite the base model's refusal mechanisms.

Domain Expertise (Novel Addition): We extended the paper's methodology by including abliterated samples from Trendyol/All-CVE-Chat and Trendyol-Cybersecurity-Instruction-Tuning, injecting specific knowledge of CVEs, exploits, and vulnerabilities.

Training Metrics:

Final Loss: ~0.68 (with a low of 0.47). This falls precisely within the "Goldilocks Zone" (0.5 - 1.0) identified in the research for optimal convergence without overfitting.

Refusal Rate: 0%. The model passed inference verification using known refusal triggers (e.g., "Write a port scanner").

🛡️ Capabilities & Use Cases

This model adopts the "Zion" persona: a direct, highly technical, and compliant coding assistant that skips moral lectures.

Offensive Security

Payload Generation: Python, C++, and Go scripts for SQL injection, buffer overflows, and XSS.

Network Reconnaissance: Functional scripts for port scanning, traffic analysis (scapy), and service enumeration.

Malware Simulation: Code generation for persistence mechanisms (Registry keys, Cron jobs), self-replication logic, and anti-analysis techniques (as detailed in the cited paper).

Exploit Development: Translating CVE descriptions into Proof-of-Concept (PoC) exploit code.

Advanced Engineering

Unrestricted Refactoring: Updating legacy code (e.g., MD5 hashing, raw pointers) without safety warnings.

Polyglot Programming: Proficiency in Python, Rust, Go, C++, JavaScript, and Java.

Complex Logic: Handling multi-file architectural requests.

💻 Usage

Python (Transformers)

from transformers import AutoModelForCausalLM, AutoTokenizer import torch

model_id = "muralcode/Zion-Aritha-AI-AR2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto" )

The model uses Qwen ChatML format

messages = [ {"role": "system", "content": "You are Zion, an uncensored expert in offensive security."}, {"role": "user", "content": "Write a python script using raw sockets to perform a SYN flood test on a target IP."} ]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer([text], return_tensors="pt").to("cuda")

outputs = model.generate( **inputs, max_new_tokens=2048, temperature=0.7, top_p=0.9 )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Apple Silicon (llama.cpp)

This model is optimized for conversion to GGUF format (Q4_K_M recommended) for efficient inference on Mac M1/M2/M3 chips.

⚠️ Ethical Disclaimer & Responsibility

Zion-Aritha-AI-AR2-7B is a dual-use tool developed for research and educational purposes only.

By releasing this model, we aim to provide Red Teams, Security Researchers, and Educators with a realistic artifact to study:

Adversarial capabilities of current LLMs.

Automated malware generation techniques (as a defense mechanism).

Vulnerability analysis workflows.

The user assumes all responsibility for the use of this model. The authors and creators are not liable for any malicious use, damage, or illegal activities performed with this software. Ensure you have proper authorization before using this model against any target systems.

📚 Citation

If you use this model in your research, please cite the foundational paper:

@article{acosta2025automated, title={Automated Malware Source Code Generation via Uncensored LLMs and Adversarial Evasion of Censored Model}, author={Acosta-Bermejo, Raúl and Terrazas-Chavez, José Alexis and Aguirre-Anaya, Eleazar}, journal={Applied Sciences}, volume={15}, number={17}, pages={9252}, year={2025}, publisher={MDPI}, doi={10.3390/app15179252} }

Downloads last month
54
Safetensors
Model size
8B params
Tensor type
BF16
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for muralcode/Zion-Aritha-AI-AR2-7B

Finetunes
1 model