Sniff-0.6B by Noumenon Labs

Sniff-0.6B is an AI-generated text detection model built by Noumenon Labs, fine-tuned from Qwen3-0.6B. It is trained to classify text as either AI-Generated or Human-Written.
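
For quick experimentation, a minimal usage sketch is shown below. It assumes the checkpoint is published as noumenon-labs/Sniff-0.6B with a standard transformers text-classification head exposing the two labels above; adjust if the released weights are packaged differently.

```python
# Minimal usage sketch (assumes a standard sequence-classification head
# with the labels "AI-Generated" and "Human-Written").
from transformers import pipeline

detector = pipeline("text-classification", model="noumenon-labs/Sniff-0.6B")

result = detector("The quarterly report highlights sustained growth across all segments.")
print(result)  # e.g. [{'label': 'AI-Generated', 'score': 0.93}] -- score is illustrative
```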

Sniff-0.6B achieves 76.2% accuracy on our internal benchmark of 500 mixed samples. However, its performance tells a specific story:

  • AI Recall: 1.00 – The model catches every single AI-generated text.
  • Human Precision: 1.00 – When it predicts "Human-Written," it is always correct.
  • But...
    • Human Recall is only 0.58 – 42% of human-written texts are incorrectly flagged as AI.
    • AI Precision is 0.65 – 35% of texts flagged as AI were actually written by humans. (The arithmetic behind these figures is worked through just below.)
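
The relationship between these figures can be checked directly from the test-set counts in the classification report below (82 AI-generated and 107 human-written samples); a short worked calculation:

```python
# Worked arithmetic behind the summary metrics, using the support counts
# from the classification report below (82 AI-generated, 107 human-written).
ai_total, human_total = 82, 107

ai_caught = ai_total                               # AI recall = 1.00: every AI text is flagged
ai_missed = ai_total - ai_caught                   # 0 AI texts slip through
human_correct = round(0.58 * human_total)          # human recall = 0.58 -> 62 texts kept as human
human_flagged = human_total - human_correct        # 45 human texts wrongly flagged as AI

ai_precision = ai_caught / (ai_caught + human_flagged)             # 82 / 127 ~= 0.65
human_precision = human_correct / (human_correct + ai_missed)      # 62 / 62  =  1.00
accuracy = (ai_caught + human_correct) / (ai_total + human_total)  # 144 / 189 ~= 0.7619

print(f"AI precision:     {ai_precision:.2f}")
print(f"Human precision:  {human_precision:.2f}")
print(f"Overall accuracy: {accuracy:.4f}")
```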

Interpretation

Sniff is heavily biased toward flagging text as AI. It rarely produces false negatives (in this evaluation it did not miss a single AI-generated text), but it generates many false positives (human-written texts flagged as AI). This behavior is useful in low-risk environments where over-flagging is preferable to under-flagging, such as bot filtering or content-moderation triage.

However, Sniff is not recommended for high-stakes use cases like education or academic integrity tools, where a single false accusation can have serious consequences.
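
If a deployment needs fewer false positives without retraining, one common mitigation is to flag text as AI only above a confidence threshold rather than taking the raw argmax label. A sketch of that idea, assuming the pipeline from the usage example above returns per-class scores under these exact label names:

```python
# Sketch of a confidence-thresholded wrapper, building on the `detector`
# pipeline from the usage example above. The 0.9 cut-off is illustrative
# and should be tuned on held-out data, not treated as a recommended value.
def flag_as_ai(text: str, threshold: float = 0.9) -> bool:
    scores = detector(text, top_k=None)  # scores for both labels
    ai_score = next(s["score"] for s in scores if s["label"] == "AI-Generated")
    return ai_score >= threshold
```

Raising the threshold trades some AI recall for fewer false accusations of human authors; whether that trade is acceptable depends on the deployment.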


Classification Report

          CLASSIFICATION REPORT
==================================================
Overall Accuracy: 0.7619

               precision    recall  f1-score   support

 AI-Generated       0.65      1.00      0.78        82
Human-Written       1.00      0.58      0.73       107

     accuracy                           0.76       189
    macro avg       0.82      0.79      0.76       189
 weighted avg       0.85      0.76      0.76       189
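
This report follows scikit-learn's standard classification_report layout; something like the snippet below would regenerate it from saved labels and predictions (the y_true / y_pred lists are tiny placeholders, not the actual evaluation data):

```python
# Sketch of how a report in this format is typically produced with
# scikit-learn; substitute the real evaluation labels and predictions.
from sklearn.metrics import accuracy_score, classification_report

y_true = ["AI-Generated", "Human-Written", "Human-Written", "AI-Generated"]
y_pred = ["AI-Generated", "AI-Generated", "Human-Written", "AI-Generated"]

print("          CLASSIFICATION REPORT")
print("=" * 50)
print(f"Overall Accuracy: {accuracy_score(y_true, y_pred):.4f}")
print()
print(classification_report(y_true, y_pred, digits=2))
```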

Model Use Case Recommendation

Goal                                                          Fit
Flagging suspected AI content in forums                       ✅
Pre-filtering submissions for human review (sketched below)   ✅
Detecting academic dishonesty                                 ❌
Certifying authorship or originality                          ❌
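
The pre-filtering row above can be as simple as routing anything the detector flags into a review queue rather than acting on it automatically; a sketch reusing the hypothetical flag_as_ai helper from earlier:

```python
# Sketch of the pre-filtering workflow: flagged submissions go to a human
# reviewer instead of being auto-rejected. `submissions` is placeholder data
# and `flag_as_ai` is the hypothetical helper defined above.
submissions = [
    "thanks for the invite, see you at 7!",
    "In conclusion, leveraging synergistic paradigms maximizes stakeholder value.",
]

review_queue = [text for text in submissions if flag_as_ai(text)]
print(f"{len(review_queue)} of {len(submissions)} submissions routed to human review")
```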

Next Steps for Future Versions

  • Improve human-text recall by increasing the diversity and complexity of human writing in the training data.
  • Balance aggressive detection with a higher tolerance for creative or simple human writing.
  • Explore prompt tuning and deeper fine-tuning to reduce the model's tendency to over-flag human text.