# Model Card for Mistral-6A-v1.6
Mistral-6A-v1.6 is an instruct fine-tuned large language model optimized for real-world use in production environments. It supports:
- 🤖 The HF Inference API (see the sketch below)
- 🧠 Function calling
- 🔡 Tokenizer v3 with vocabulary extended to 32,768 tokens
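A minimal sketch of querying the model through the hosted Inference API with `huggingface_hub`; this assumes the model is actually deployed behind an inference provider:

```python
from huggingface_hub import InferenceClient

# Assumption: the model is reachable through the hosted Inference API.
client = InferenceClient("mistralai/Mistral-6A-v1.6")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Explain prompt-gramming."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```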
## Installation
We recommend using `mistral-inference`:

```bash
pip install mistral_inference
```
## Download Weights
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home() / "mistral_models" / "6A-v1.6"
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(
    repo_id="mistralai/Mistral-6A-v1.6",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)
```
## Chat CLI
Once installed, start chatting instantly:
```bash
mistral-chat $HOME/mistral_models/6A-v1.6 --instruct --max_tokens 256
```
## Python Instruct Mode
```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
request = ChatCompletionRequest(messages=[UserMessage(content="Explain prompt-gramming.")])
tokens = tokenizer.encode_chat_completion(request).tokens
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=64,
    temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```
## Use with `transformers`
To generate completions with the Hugging Face `transformers` library:
```python
from transformers import pipeline
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a story about a robot dog."},
]

chatbot = pipeline("text-generation", model="mistralai/Mistral-6A-v1.6")
chatbot(messages)
```
## Advanced Function Calling (with `transformers` v4.42.0+)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-6A-v1.6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
    """
    Example tool: Get the current weather.

    Args:
        location (str): e.g. "San Francisco, CA"
        format (str): temperature format, "celsius" or "fahrenheit"
    """
    pass
conversation = [{"role": "user", "content": "What's the weather like in Tokyo?"}]
tools = [get_current_weather]
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
> 🔔 **Note:** Full tool-call support requires generating `tool_call_id`s, appending the assistant's tool calls, and adding the tool results to the conversation history. See the Transformers function calling guide.
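For illustration, a minimal sketch of that round trip, continuing from the example above (it reuses `conversation`, `tools`, `tokenizer`, and `model`). The parsed tool call, the 9-character alphanumeric ID format, and the `"22.0"` result are illustrative assumptions, not output from this model:

```python
import random
import string

# Assumption: the model's generated text was parsed into this tool call.
tool_call = {"name": "get_current_weather", "arguments": {"location": "Tokyo", "format": "celsius"}}

# Mistral-style chat templates expect 9-character alphanumeric tool call IDs.
tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))

# Record the assistant's tool call, then append the (hypothetical) tool result.
conversation.append(
    {"role": "assistant", "tool_calls": [{"type": "function", "id": tool_call_id, "function": tool_call}]}
)
conversation.append(
    {"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_weather", "content": "22.0"}
)

# Re-apply the chat template and generate again so the model can use the result.
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```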
## Limitations
This model is not equipped with moderation or safety filters. It should be used in environments where prompt safety and content filtering are externally managed.
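For example, a minimal sketch of wrapping generation behind an external filter; `is_safe` is a hypothetical placeholder for whatever moderation service you run, not part of this model or of `transformers`:

```python
from transformers import pipeline

chatbot = pipeline("text-generation", model="mistralai/Mistral-6A-v1.6")

def is_safe(text: str) -> bool:
    """Hypothetical hook into an external moderation service."""
    raise NotImplementedError("Wire up your own content filter here.")

def guarded_chat(prompt: str) -> str:
    # Screen the prompt before it reaches the model.
    if not is_safe(prompt):
        return "Blocked by external content filter."
    result = chatbot([{"role": "user", "content": prompt}])
    reply = result[0]["generated_text"][-1]["content"]
    # Screen the completion before returning it to the user.
    return reply if is_safe(reply) else "Blocked by external content filter."
```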
## Authors
Developed by the Mistral AI team:
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall