---
license: mit
base_model:
- Qwen/QwQ-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- cybersecurity
- ethicalhacking
- informationsecurity
- pentest
- code
- applicationsecurity
---

Finetuned by Alican Kiraz

[![Linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://tr.linkedin.com/in/alican-kiraz)
![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2FAlicanKiraz0)
![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCEAiUT9FMFemDtcKo9G9nUQ)

Links:
- Medium: https://alican-kiraz1.medium.com/
- Linkedin: https://tr.linkedin.com/in/alican-kiraz
- X: https://x.com/AlicanKiraz0
- YouTube: https://youtube.com/@alicankiraz0

With the release of the new Qwen QwQ-32B, I quickly began training SenecaLLM v1.4 on this base model.

Training details:
* About 30 hours in BF16 on 4×H200 GPUs

**It does not pursue any profit.**

With the new dataset I have prepared, the model produces strong outputs in the following areas:
* Information Security v1.5
* Incident Response v1.3.1
* Threat Hunting v1.3.2
* Ethical Exploit Development v2.0
* Purple Team Tactics v1.3
* Reverse Engineering v2.0

"Those who shed light on others do not remain in darkness..."
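Since the card declares `library_name: transformers` and a QwQ-32B base, a minimal loading-and-inference sketch may help. `MODEL_ID` is a placeholder set to the base model here, because the fine-tuned repository id is not stated in this card; substitute the actual SenecaLLM repo id.

```python
# Minimal usage sketch for a QwQ-32B-based fine-tune via transformers.
# MODEL_ID is an assumption: the base model is shown, not the fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/QwQ-32B"  # replace with the SenecaLLM repository id


def build_messages(question: str) -> list[dict]:
    """Wrap a single user question in the chat format the tokenizer expects."""
    return [{"role": "user", "content": question}]


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )

    messages = build_messages("Explain the phases of an incident response plan.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Decode only the newly generated tokens, skipping the prompt.
    outputs = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that a 32B model in BF16 needs roughly 65 GB of weights alone, so `device_map="auto"` is used to spread layers across available GPUs.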