Add tool calling template for HF format (#63, opened 2 days ago by Frrosta)
FSDP Training with Mistral-Small-3.1-24B-Instruct-2503 Model and DecoderLayer (#62, opened 5 days ago by ian00000)
Unexpected PixtralProcessor use with Mistral-Small-3.1 on vLLM — text-only use case (#61, opened 6 days ago by AbasKhan)
Consolidated safetensors (#60, opened 6 days ago by Aktsvigun)
Removed redundancy in suggested system prompt (#59, opened 7 days ago by owao)
Add chat_template to tokenizer_config (#58, opened 8 days ago by alexmarques)
Create Mistral-Small-3.1-24B-Instruct-2503 (#56, opened 9 days ago by dylanliao9191)
Request: DOI (#55, opened 10 days ago by gmaterni)
Address discrepancies in the languages supported by the Mistral Small 3.1 2503 (#54, opened 10 days ago by fpaupier)
chat template not working for tool calling (#52, opened 12 days ago by thies)
[resolved] vllm nightly hf config (#51, opened 13 days ago by zaventh)
Transformers Code Almost Works (#48, opened 14 days ago by binder11)
Problem hosting the model using vllm (#45, opened 19 days ago by ShaoServient)
FP8 Dynamic/W8A16 Quants Please (#44, opened 19 days ago by rjmehta)
Speculative Decoding: I'd love to have a much smaller "companion model" (0.5B for example) (#43, opened 19 days ago by lstrozzi)
model_card (#42, opened 20 days ago by Nahieli777777)
Fix typos (#41, opened 22 days ago by sukrucildirr)
Can you provide the finetune code? (#40, opened 22 days ago by jason500)
Upload Gravity%2520Falls%2520Intro%2520x%2520playboi%2520carti%25203%205.mp3 (#39, opened 23 days ago by Jasond6111)
Can't determine properly which is greater between 9.9 and 9.11 (#38, opened 24 days ago by sniffski)
Add transformers snippet (#36, opened 25 days ago by merve)
Please help with error: Mistral-Small is not running on macOS with CPU M2 Silicon. With Assert Error (#34, opened 25 days ago by NickolasCh)
Deployment on Amazon SageMaker Endpoint (#33, opened 25 days ago by dgallitelli)
Request support for text-only inference in transformers (Mistral3ForCausalLM class) (#32, opened 26 days ago by alan925)
update metadata (#31, opened 26 days ago by nickname100231)
Quantized models with vision included? (#27, opened 27 days ago by geoad)
Corrected vllm link in readme (#26, opened 27 days ago by riversnow)
Regarding Video Understanding (#25, opened 27 days ago by fensz)
Support tool calls with chat template (#24, opened 27 days ago by CISCai)
FIX for the pip install vllm --ugrade --> pip install vllm --upgrade (#23, opened 27 days ago by rbgo)
How do we use it with Transformers? Can you give some sample code? (#22, opened 28 days ago by rameshch)
Local Installation Video and Testing on Vision, Coding, Math, Text - Step by Step (#21, opened 28 days ago by fahdmirzac)
Visual Grounding (#20, opened 28 days ago by Maverick17)
Mistral-small (#19, opened 28 days ago by Melkiss)
Add chat template to tokenizer config (#18, opened 28 days ago by mrfakename)
Mistral3ForConditionalGeneration has no vLLM implementation and the Transformers implementation is not compatible with vLLM. Try setting VLLM_USE_V1=0. (#16, opened 28 days ago by pedrojfb99)
set model_max_length to the maximum length of model context (131072 tokens) (#15, opened 28 days ago by x0wllaar)
Problem with `mistral3` when loading the model (#14, opened 28 days ago by r3lativo)
Add chat_template to tokenizer_config.json (#11, opened 28 days ago by bethrezen)
Can't wait for HF? try chatllm.cpp (#7, opened 28 days ago by J22)
You did it again... (#4, opened 29 days ago by MrDevolver)
HF Format? (#2, opened 29 days ago by bartowski)