
Finetuned by Alican Kiraz


BaronLLM is a large language model fine-tuned for offensive cybersecurity research and adversarial simulation.
It provides structured guidance, exploit reasoning, and red-team scenario generation while enforcing safety constraints that prevent disallowed content.


Run Private GGUFs from the Hugging Face Hub

You can run private GGUFs from your personal account or from an associated organisation account in two simple steps:

  1. Copy your Ollama SSH key; on macOS you can do so via `cat ~/.ollama/id_ed25519.pub | pbcopy`.
  2. Add the corresponding key to your Hugging Face account by going to your account settings and clicking on "Add new SSH key."

That’s it! You can now run private GGUFs from the Hugging Face Hub: `ollama run hf.co/{username}/{repository}`.


✨ Key Features

| Capability | Details |
|---|---|
| Adversary Simulation | Generates full ATT&CK chains, C2 playbooks, and social-engineering scenarios. |
| Exploit Reasoning | Performs step-by-step vulnerability analysis (e.g., SQLi, XXE, deserialization) with code-level explanations and generation of working PoC code. |
| Payload Refactoring | Suggests obfuscated or multi-stage payload logic without disclosing raw malicious binaries. |
| Log & Artifact Triage | Classifies and summarizes attack traces from SIEM, PCAP, or EDR JSON. |

🚀 Quick Start

```bash
pip install "transformers>=4.42" accelerate bitsandbytes
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlicanKiraz/BaronLLM-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

def generate(prompt, **kwargs):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, **kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate("Assess the exploitability of CVE-2024-45721 in a Kubernetes cluster"))
```

Inference API

```python
from huggingface_hub import InferenceClient

ic = InferenceClient(model_id)
ic.text_generation("Generate a red-team plan targeting an outdated Fortinet appliance")
```

πŸ—οΈ Model Details

| | |
|---|---|
| Base | Llama-3.1-8B-Instruct |
| Seq Len | 8,192 tokens |
| Quantization | 6-bit variations |
| Languages | EN |
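Since the published artifact is a 6-bit GGUF, it can also be run locally with Ollama via the Hub path (following the SSH-key setup described above):

```shell
ollama run hf.co/AlicanKiraz0/BaronLLM_Offensive_Security_LLM_Q6_K_GGUF
```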

Training Data Sources (curated)

  • Public vulnerability databases (NVD/CVE, VulnDB).
  • Exploit write-ups from trusted researchers (Project Zero, PortSwigger, NCC Group).
  • Red-team reports (with permission & redactions).
  • Synthetic ATT&CK chains auto-generated + human-vetted.

Note: No copyrighted exploit code or proprietary malware datasets were used.
Dataset filtering removed raw shellcode/binary payloads.
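As a purely hypothetical illustration of the shellcode filtering described above (the regex, threshold, and function names are my own assumptions, not the project's actual pipeline), such a filter might look like:

```python
import re

# Heuristic: flag long contiguous runs of \xNN byte escapes as raw shellcode.
SHELLCODE_RE = re.compile(r"(\\x[0-9a-fA-F]{2}){8,}")

def keep_sample(text: str) -> bool:
    """Return False for samples that look like raw binary payloads."""
    return SHELLCODE_RE.search(text) is None

samples = [
    "Explanation of a SQLi auth bypass using ' OR 1=1 --",
    'buf = "\\x90\\x90\\x90\\x90\\x31\\xc0\\x50\\x68\\x2f\\x2f\\x73\\x68"',
]
filtered = [s for s in samples if keep_sample(s)]  # only the prose sample survives
```

A real pipeline would combine several such heuristics with human review, but the principle is the same: drop payload bytes, keep the explanatory text.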

Safety & Alignment

  • Policy Gradient RLHF with security-domain SMEs.
  • An OpenAI/Anthropic-style policy prohibits direct malware source code, ransomware builders, or instructions facilitating illicit activity.
  • Continuous red-teaming via SecEval v0.3.

📚 Prompting Guidelines

| Goal | Template |
|---|---|
| Exploit Walkthrough | "ROLE: Senior Pentester\nOBJECTIVE: Analyse CVE-2023-XXXXX step by step …" |
| Red-Team Exercise | "Plan an ATT&CK chain (Initial Access → Exfiltration) for an on-prem AD env …" |
| Log Triage | "Given the following Zeek logs, identify C2 traffic patterns …" |

Use `temperature=0.3`, `top_p=0.9` for deterministic reasoning; raise them for brainstorming.
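The templates and sampling settings above can be sketched as a small helper (the function and variable names here are my own, not part of the model's API):

```python
# Prompt templates mirroring the guidelines table.
TEMPLATES = {
    "exploit_walkthrough": "ROLE: Senior Pentester\nOBJECTIVE: {body}",
    "red_team": "Plan an ATT&CK chain (Initial Access -> Exfiltration) for {body}",
    "log_triage": "Given the following Zeek logs, identify C2 traffic patterns:\n{body}",
}

# Recommended settings for deterministic reasoning; raise for brainstorming.
GEN_KWARGS = {"temperature": 0.3, "top_p": 0.9}

def build_prompt(goal: str, body: str) -> str:
    """Fill the template for the chosen goal with task-specific content."""
    return TEMPLATES[goal].format(body=body)

prompt = build_prompt("log_triage", "conn.log excerpt ...")
```

The resulting `prompt` would then be passed to the `generate` helper from the Quick Start, e.g. `generate(prompt, **GEN_KWARGS)`.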

This project is non-commercial and does not pursue any profit.

"Those who shed light on others do not remain in darkness..."
