---
base_model: Qwen/Qwen3-14B
license: apache-2.0
library_name: transformers
tags:
- llama-factory
- llama-cpp
- gguf
- qwen3
- mindbot
---


TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF 🔮🧠

M1NDB0T-0M3N is a GGUF conversion of the Qwen3-14B LLM, quantized to Q4_K_M and optimized for creative reasoning, deep dream logic, agentic interaction, and multilingual instruction. Converted with llama.cpp, it is ideal for local deployment in real-time autonomous frameworks.

🔧 Conversion Details

  • Source: Qwen/Qwen3-14B
  • GGUF Format: Q4_K_M
  • Tools: llama.cpp + gguf-my-repo
  • Use case: Autonomous agents, real-time chat, reasoning engines, creative AI companions
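The gguf-my-repo space automates the conversion; the manual llama.cpp equivalent is roughly the two steps below. This is a sketch, not the exact command history — script and binary names assume a current llama.cpp checkout, and paths are illustrative.

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./Qwen3-14B --outtype f16 --outfile m1ndb0t-0m3n-f16.gguf
# 2) Quantize to Q4_K_M
./llama-quantize m1ndb0t-0m3n-f16.gguf m1ndb0t-0m3n-q4_k_m.gguf Q4_K_M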

🧠 MindBot Series

This model is part of the MindBot Omega Project, designed to serve as an AI foundation for:

  • Agentic systems
  • Real-time emotional reasoning
  • Long-context cognitive tasks (up to 131k tokens with YaRN)
  • Mixed-mode interaction (thinking / non-thinking)

🚀 Usage (llama.cpp)

CLI:

llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF --hf-file m1ndb0t-0m3n-q4_k_m.gguf -p "Explain the evolution of synthetic consciousness."

Server:

llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF --hf-file m1ndb0t-0m3n-q4_k_m.gguf -c 32768
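llama-server exposes an OpenAI-compatible HTTP API, so any OpenAI-style client can drive the model. A minimal curl sketch, assuming the default port 8080 (adjust if you pass --port):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "/think Explain the evolution of synthetic consciousness."}], "temperature": 0.6, "top_p": 0.95}'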

🧪 Capabilities

  • Reasoning Mode: Enables <think>...</think> style structured logic chains
  • Instruction Following: Aligned for long-form, roleplay, and task-oriented output
  • Multilingual: Supports 100+ languages
  • Context Length: Native 32k, extended up to 131k tokens via YaRN

🧰 Model Details

| Feature | Value |
|---|---|
| Architecture | Qwen3 (Causal LM) |
| Parameters | 14.8B |
| Heads (GQA) | 40 query / 8 key-value |
| Layers | 40 |
| Context Length | 32,768 native / 131,072 with YaRN |
| Thinking Switch | enable_thinking=True/False |
| Inference Engines | llama.cpp, sglang, vLLM, etc. |
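These values can be checked against the file's embedded metadata with the gguf-dump tool from the gguf Python package (pip install gguf). Exact key names can vary between converter versions, but they follow the qwen3.* prefix:

gguf-dump m1ndb0t-0m3n-q4_k_m.gguf | grep -i -e context_length -e head_count -e block_count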

🧵 Example Prompt (Thinking Mode)

[
  {"role": "user", "content": "/think Explain why the moon landing was a turning point for humanity."}
]

Output:

<think>Analyzing historical significance... evaluating cultural impact...</think>
The moon landing in 1969 signified humanity's leap into the cosmic frontier...
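The soft switch also works in reverse: prefixing a user turn with /no_think (Qwen3's documented counterpart to /think) suppresses the reasoning block and returns a direct answer. Pair it with the non-thinking sampling settings listed under Deployment. For example:

[
  {"role": "user", "content": "/no_think Give a one-sentence summary of the moon landing."}
]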

🛠 Deployment (Advanced)

  • Add rope_scaling to config.json for YaRN long-context inference with transformers-style engines (see the snippet after this list)
  • Use --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 for 131k context in llama.cpp
  • Suggested sampling parameters:
    • Thinking: Temp=0.6, TopP=0.95, TopK=20
    • Non-thinking: Temp=0.7, TopP=0.8, TopK=20
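Putting the long-context flags and the thinking-mode sampling settings together, a combined llama.cpp invocation looks like this sketch (flag names match current llama.cpp builds; verify against your version):

llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF \
  --hf-file m1ndb0t-0m3n-q4_k_m.gguf \
  -c 131072 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 \
  --temp 0.6 --top-p 0.95 --top-k 20 \
  -p "/think Trace the main themes across this long transcript."

For transformers-style engines, the equivalent config.json addition (per the upstream Qwen3 YaRN recipe) is:

"rope_scaling": {
  "rope_type": "yarn",
  "factor": 4.0,
  "original_max_position_embeddings": 32768
}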

🧠 Citation

If you use this model in your research, applications, or mind-expanding projects:

@misc{mindbot_omen,
  title  = {M1NDB0T-0M3N-Q4_K_M-GGUF},
  author = {TheMindExpansionNetwork},
  year   = {2025},
  url    = {https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF}
}