---
license: mit
language:
- en
- zh
metrics:
- accuracy
base_model:
- Qwen/Qwen3-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- blockchain
- conversational
- web3
- qwen3
eval_results:
- task: domain-specific evaluation
  dataset: DMindAI/DMind_Benchmark
  metric: normalized web3 score
  score: 77.44
  model: DMind-1
  model_rank: 1 / 24
---
## Introduction
The rapid growth of Web3 technologies—blockchain, DeFi, and smart contracts—demands specialized AI large language models (LLMs) with precise domain alignment and advanced reasoning capabilities. However, general-purpose LLMs often lack the domain-specific accuracy, nuanced reasoning, and instruction-following alignment that expert applications demand.
To address these limitations, we introduce DMind-1, a domain-specialized LLM fine-tuned for the Web3 ecosystem via supervised instruction tuning and reinforcement learning from human feedback (RLHF). Built on a powerful base model, DMind-1 achieves strong improvements in task accuracy, content safety, and expert-aligned interaction, significantly surpassing general-purpose models. DMind-1 represents a robust foundation for intelligent agents in the Web3 ecosystem.
## 1. Model Overview

### DMind-1
DMind-1 is a specialized Web3 expert model built on the Qwen3-32B base. Leveraging a state-of-the-art transformer architecture, it integrates deep domain knowledge through a novel two-stage fine-tuning pipeline, establishing its distinctive strengths in Web3-specific applications.
Key Points:
Comprehensive Domain Expertise Data: In the first stage, DMind-1 underwent Supervised Fine-Tuning (SFT) on 13,276 expert-curated knowledge items distilled from 32.7GB of Web3 documentation, covering 8 key subdomains including DeFi, tokenomics, governance, and smart contracts. These data points were extracted and structured by a team of domain experts to ensure both depth and accuracy. To enable efficient and scalable training, we employed Low-Rank Adaptation (LoRA) during the SFT stage, allowing DMind-1 to internalize specialized Web3 knowledge while preserving the general-language capabilities of its base model.
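The LoRA technique mentioned above can be sketched numerically. This is an illustrative toy only: the dimensions, rank, and scaling below are assumptions, not DMind-1's actual training configuration.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r), so the effective
# weight is W_eff = W + (alpha / r) * B @ A, with r << min(d_out, d_in).
d_in, d_out, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init => W_eff == W at start

def lora_forward(x):
    # Base path plus the scaled low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x), W @ x)
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

Because only `A` and `B` are trained (32 parameters here versus 64 in `W`, and a far larger gap at real model scale), the base model's general-language weights stay frozen while the adapter absorbs domain knowledge.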
Reinforcement Learning from Human Feedback (RLHF): To further align the model with expert expectations for accuracy in realistic interaction scenarios, we implemented an RLHF phase composed of:
- Reward Model Training: We trained a domain-specific reward model using preference-ranked outputs collected from human experts across diverse Web3-specific question-answer and interaction scenarios. This model learned to assess which responses best reflect factual accuracy and expert-level reasoning in the Web3 domain.
- Policy Optimization with PPO: Building on the SFT model, we fine-tuned Qwen3-32B using Proximal Policy Optimization (PPO), guided by the trained reward model. The policy network was optimized based on feedback from simulated Web3 dialogue environments, while LoRA ensured resource-efficient parameter updates and significantly reduced compute and memory requirements. This dual-stage approach enabled efficient fine-tuning of a larger model on Web3-specific tasks while achieving high alignment with human intent.
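The core of the PPO step described above is the clipped surrogate objective. The following is a minimal sketch of that objective in isolation; the actual reward model, advantage estimation, and hyperparameters used for DMind-1 are not shown here and the numbers are illustrative.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    ratio = math.exp(logp_new - logp_old)
    # Clip the ratio to [1 - eps, 1 + eps] to keep policy updates small.
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    # PPO maximizes the minimum of the unclipped and clipped terms;
    # negating gives a loss to minimize.
    return -min(ratio * advantage, clipped * advantage)

# A response the reward model scores above baseline (positive advantage):
# the clip caps how much the policy can move toward it in one update.
print(ppo_clip_loss(logp_new=-1.0, logp_old=-1.2, advantage=0.5))
```

In the full pipeline, `advantage` would be derived from the reward model's score on each generated response, so responses that experts would prefer are reinforced while updates stay close to the SFT policy.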
Domain-Aligned Reasoning and Interaction: DMind-1 exhibits advanced Web3-aligned reasoning and interactive capabilities in the following areas:
- Natural Dialogue Fluency: Coherent, context-aware conversations on complex Web3 topics, with strong multi-turn consistency.
- Complex Instruction Following: Reliable execution of multi-step instructions and conditional logic, supporting agent-driven workflows.
- Safe and Compliant Content Generation: Outputs are aligned with domain-specific safety, ethics, and regulatory standards.
## 2. Evaluation Results
We evaluate DMind-1 using the DMind Benchmark, a domain-specific evaluation suite tailored to assess large language models in the Web3 context. The benchmark comprises 1,917 expert-reviewed questions across nine critical categories: Blockchain Fundamentals, Infrastructure, Smart Contracts, DeFi, DAO, NFT, Token Economics, Meme, and Security. It combines multiple-choice and subjective open-ended tasks that simulate real-world challenges and require deep contextual understanding, providing a comprehensive assessment of both factual knowledge and advanced reasoning.
Under this rigorous evaluation, DMind-1 ranked 1st among 24 leading models, outperforming both proprietary (e.g., Grok-3) and open-source (e.g., DeepSeek-R1) LLMs. Notably, our distilled variant DMind-1-mini also performed strongly, ranking 2nd overall. This demonstrates the effectiveness of our compact distillation pipeline.
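A normalized overall score like the one reported in the metadata can be produced by aggregating per-category results. The benchmark's actual weighting scheme is not specified here, so the sketch below uses a plain unweighted mean and made-up category scores purely for illustration.

```python
# Hypothetical per-category accuracies (fractions in [0, 1]); not real results.
category_scores = {
    "Blockchain Fundamentals": 0.82,
    "DeFi": 0.79,
    "Security": 0.71,
}

def normalized_score(scores):
    # Simple unweighted mean of category scores, scaled to 0-100.
    # A real benchmark might weight categories by question count or difficulty.
    return 100 * sum(scores.values()) / len(scores)

print(round(normalized_score(category_scores), 2))
```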
## 3. Use Cases
- Expert-Level Question & Answering: Provides accurate, context-aware answers on blockchain, DeFi, smart contracts, and related Web3 topics.
- Compliance-Aware Support: Assists in drafting or reviewing content within regulatory and legal contexts.
- Content Generation in Domain: Produces Web3-specific blog posts, documentation, and tutorials tailored to developers and users.
- DeFi Strategy Suggestions: Generates insights and recommendations for yield farming, liquidity provision, and portfolio strategies based on user-provided data.
- Risk Management: Suggests strategies aligned with user risk profiles for more informed decision-making in volatile markets.
## 4. Quickstart

### 4.1 Model Downloads
| Model | Base Model | Download |
| --- | --- | --- |
| DMind-1 | Qwen3-32B | Hugging Face Link |
| DMind-1-mini | Qwen3-14B | Hugging Face Link |
### 4.2 OpenRouter API
You can access DMind-1 via the OpenRouter API. Simply specify the desired model in the `model` field of your request payload.

API Endpoint:
`https://openrouter.ai/api/v1/chat/completions`

Authentication:
- Obtain your API key from OpenRouter
- Include it in the `Authorization` header as `Bearer YOUR_API_KEY`

Model Identifiers:
- `dmind-1` — Full-size expert model
Example Request (Python):
```python
import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

data = {
    "model": "dmind-1",
    "messages": [
        {"role": "user", "content": "Explain DeFi in simple terms."}
    ]
}

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers=headers,
    json=data
)

print(response.json())
```
Example Request (cURL):
```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dmind-1",
    "messages": [{"role": "user", "content": "What is a smart contract?"}]
  }'
```
Notes:
- Replace `YOUR_API_KEY` with your actual OpenRouter API key.
- Change the `model` field to `dmind-1` as needed.
- Both models support the same API structure for easy integration.
### 4.3 OpenRouter Web Chat
You can try DMind-1 instantly using the OpenRouter Web Chat.
- Select your desired model from the dropdown menu (DMind-1).
- Enter your prompt and interact with the model in real time.
## License
- The code repository and model weights for DMind-1 are released under the MIT License.
- Commercial use, modification, and derivative works (including distillation and fine-tuning) are permitted.
- Base Models:
- DMind-1 is derived from Qwen3-32B, originally licensed under the Qwen License.
- Please ensure compliance with the original base model licenses when using or distributing derivatives.
## Contact
For questions or support, please contact [email protected]