Qwen3 Bifrost SOL 4B

This fine-tuned variant of the Qwen3 4B model was supervised fine-tuned on a blockchain-specific dataset (Bifrost-AI/Solana-Vanguard-Challenge) and optimized for downstream tasks in blockchain coding and smart contract development on the Solana ecosystem.

The Solana Vanguard Challenge dataset, comprising 1,000 diverse and in-depth questions, offers full-spectrum coverage of the Solana ecosystem. It spans fundamental blockchain concepts, advanced on-chain programming in Rust and the Anchor framework, client-side integration in TypeScript, detailed security strategies, and performance as well as regulatory considerations.

Qwen3 Bifrost SOL 4B is in active development, with additional fine-tuning sessions and benchmark statistics coming soon!

Provided Quants

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | IQ1_S | 1.1 | very low quality |
| GGUF | IQ1_M | 1.2 | very low quality |
| GGUF | TQ1_0 | 1.2 | very low quality |
| GGUF | IQ2_S | 1.4 | fast, lower quality |
| GGUF | Q2_K | 1.6 | fast, lower quality |
| GGUF | Q4_K_M | 2.5 | fast, recommended |
| GGUF | Q4_K_S | 2.3 | fast, recommended |
| GGUF | Q4_0 | 2.3 | fast, recommended |
| GGUF | Q5_K_S | 2.7 | |
| GGUF | Q5_K_M | 2.8 | |
| GGUF | Q6_K | 3.1 | very good quality |
| GGUF | Q8_0 | 4.0 | fast, best quality |
| GGUF | F16 | 7.7 | 16 bpw, highest quality |
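
The quantized files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch assuming the node-llama-cpp (v3) TypeScript API and a locally downloaded Q4_K_M file; the model path, file name, and prompt are placeholders, not part of this release.

```ts
// Minimal sketch: loading a downloaded quant with node-llama-cpp (v3 API assumed).
// The file name below is a placeholder; point modelPath at whichever quant you pulled from this repo.
import path from "node:path";
import { getLlama, LlamaChatSession } from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
  modelPath: path.join(process.cwd(), "models", "Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf"),
});

const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

// Ask a Solana-flavored question, matching the model's fine-tuning domain.
const answer = await session.prompt(
  "Explain what a program derived address (PDA) is on Solana and when an Anchor program should use one."
);
console.log(answer);
```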

Training Session

  • Time: 11 hours and 22 minutes
  • GPU: NVIDIA GeForce RTX 3090
  • Batches: 1000
  • Context size: 2043
  • Batch size: 1
  • Learning rate: 2e-5
  • Training loss: 1.06
  • Eval loss: 0.81

Dataset Composition

  • Total Questions: 1,000
  • Languages Covered:
    • Rust: On-chain smart contract development, security best practices, advanced state management, CPIs, PDAs, and more.
    • TypeScript: Client-side integration using @solana/web3.js, wallet adapters, Metaplex for NFT protocols, dynamic transaction composition, and front-end dApp development (see the sketch after this list).
  • Planned Extensions:
    • C# (Solnet): To be integrated later for .NET ecosystem coverage.
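
To illustrate the kind of client-side TypeScript material the dataset covers, here is a minimal sketch using @solana/web3.js: deriving a program derived address (PDA) and composing a simple SOL transfer. The program ID, seed, and devnet endpoint are illustrative placeholders, not values drawn from the dataset.

```ts
// Sketch of typical client-side tasks covered by the dataset: PDA derivation
// and transaction composition with @solana/web3.js. All values are placeholders.
import {
  Connection,
  clusterApiUrl,
  Keypair,
  PublicKey,
  SystemProgram,
  Transaction,
  LAMPORTS_PER_SOL,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

async function main() {
  const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
  const payer = Keypair.generate(); // placeholder; a real dApp would use a wallet adapter

  // Derive a PDA from a seed and a stand-in program ID (a real client would use its deployed program's ID).
  const programId = Keypair.generate().publicKey; // hypothetical program ID
  const [pda, bump] = PublicKey.findProgramAddressSync(
    [Buffer.from("vault"), payer.publicKey.toBuffer()],
    programId
  );
  console.log(`PDA: ${pda.toBase58()} (bump ${bump})`);

  // Compose a simple 0.01 SOL transfer to the PDA.
  // Note: the payer must hold devnet SOL (e.g., via an airdrop) for this to confirm.
  const tx = new Transaction().add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: pda,
      lamports: 0.01 * LAMPORTS_PER_SOL,
    })
  );
  const signature = await sendAndConfirmTransaction(connection, tx, [payer]);
  console.log(`Transfer signature: ${signature}`);
}

main().catch(console.error);
```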

Disclaimer

We do not recommend using Qwen3 Bifrost SOL 4B in commercial or real-world applications without further testing and development. This current model (v1) is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly.
