# Navi

A high-performance, uncensored language model fine-tuned for cybersecurity applications.
## Model Details

This model is built upon `bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF`, leveraging its capabilities for text generation in the cybersecurity domain.
## Instructions

### Linux/Mac Instructions

To run the model locally:

1. Download `navi.llamafile`.
2. Open a terminal and navigate to the download directory.
3. Run the model using `./navi.llamafile`.
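As a sketch, the steps above look like this in a terminal. The `~/Downloads` path and the `chmod` step are assumptions on top of the listed steps: adjust the path to wherever you saved the file, and skip `chmod` if the file is already executable.

```shell
# Go to wherever navi.llamafile was saved (~/Downloads is an assumption).
cd ~/Downloads

# Llamafiles are self-contained executables; ensure the execute bit is set.
chmod +x navi.llamafile

# Launch the model and its CLI chat interface.
./navi.llamafile
```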
### Web UI

For a web interface:

1. Follow the steps above.
2. Run the model with `./navi.llamafile --server --v2`.
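Sketched in a terminal — the background `&`, the port number, and the `xdg-open`/`open` command are assumptions on top of the steps above (8080 is llamafile's default serving port):

```shell
# Start the built-in web server in the background (use kill, or run it in the
# foreground and press Ctrl-C, to stop it).
./navi.llamafile --server --v2 &

# Once the model has finished loading, open the UI in a browser.
# Port 8080 is llamafile's default; adjust if you have changed it.
xdg-open http://localhost:8080   # on macOS, use `open` instead of `xdg-open`
```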
### Windows Instructions

1. Download `navi.llamafile`.
2. Head over to your Downloads folder.
3. Find `navi.llamafile` and right-click on it.
4. Rename the file to `navi.llamafile.exe`.
5. Double-click on the file.

From here it should launch a terminal window and load the model and CLI chat interface.
### Windows Web UI

1. Follow steps 1-4 from the instructions above.
2. Right-click any open space in the folder and select "Open in Terminal", or open a terminal anywhere and navigate to wherever `navi.llamafile.exe` is.
3. Run `.\navi.llamafile.exe --server --v2` to launch the included web server.
4. Open a web browser and navigate to `localhost:8080`.
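Before opening the browser, you can check that the server is actually listening. This `curl` probe is an illustrative addition, not part of the original steps; it assumes the default port 8080 and works the same way from PowerShell or a Unix shell.

```shell
# Probe the local web server; prints the HTTP status code it returns.
# A 200 means the web UI is ready at http://localhost:8080.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```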
 
NOTE: This system has been tested on Windows 11 as well as Ubuntu 24.04 LTS.
## Model tree for saintssec/navi.llamafile

Base model: `chuanli11/Llama-3.2-3B-Instruct-uncensored`