Llama-3.1-FoundationAI-SecurityLLM-8B-Instruct Technical Report
Abstract
Foundation-Sec-8B-Instruct is a cybersecurity-focused LLM designed for chat-style interactions and instruction-following; it outperforms Llama-3.1-8B-Instruct on cybersecurity tasks while matching its instruction-following capabilities.
Large language models (LLMs) have shown remarkable success across many domains, yet their integration into cybersecurity applications remains limited due to a lack of general-purpose cybersecurity data, representational complexity, and safety and regulatory concerns. To address this gap, we previously introduced Foundation-Sec-8B, a cybersecurity-focused LLM suitable for fine-tuning on downstream tasks. That model, however, was not designed for chat-style interactions or instruction-following. In this report, we release Foundation-Sec-8B-Instruct: a model specifically trained for general-purpose cybersecurity dialogue. Built on Foundation-Sec-8B, it combines domain-specific knowledge with instruction-following, conversational capabilities, and alignment with human preferences to produce high-quality, relevant responses. Comprehensive evaluations show that Foundation-Sec-8B-Instruct outperforms Llama 3.1-8B-Instruct on a range of cybersecurity tasks while matching its instruction-following performance. It is also competitive with GPT-4o-mini on cyber threat intelligence and instruction-following tasks. We envision Foundation-Sec-8B-Instruct becoming an indispensable assistant in the daily workflows of cybersecurity professionals. We release the model publicly at https://huggingface.co/fdtn-ai/Foundation-Sec-8B-Instruct.