anita_next

"Built on m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA"

ANITA-NEXT-24B-Magistral-2506-VISION-ITA is a Thinking Vision-Language Model of the ANITA family of Large Language Models. The model is a merge of the textual layers of m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA with the vision layers and processor of mistralai/Mistral-Small-3.1-24B-Instruct-2503. This model version aims to be a Multilingual Vision-Language Model 🏁 (EN 🇺🇸 + ITA 🇮🇹) suitable for further fine-tuning on specific tasks in Italian.
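As a rough sketch of how one might prompt the merged vision model: Mistral-Small-3.1-style processors typically accept chat messages whose content mixes image and text blocks. The example below builds such a message; the image URL is a hypothetical placeholder, and the commented-out loading calls assume a recent `transformers` with multimodal support.

```python
# Hedged sketch: a multimodal chat message in the content-block format used by
# Mistral-Small-3.1-style chat templates (image block + text block).
# The URL below is a hypothetical placeholder, not a real asset.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/foto.jpg"},
            {"type": "text", "text": "Descrivi questa immagine in italiano."},
        ],
    }
]

# With `transformers` installed and enough GPU memory, one would then
# (not executed here, and subject to the library version):
# from transformers import AutoProcessor, AutoModelForImageTextToText
# repo = "m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA"
# processor = AutoProcessor.from_pretrained(repo)
# model = AutoModelForImageTextToText.from_pretrained(repo, device_map="auto")
# inputs = processor.apply_chat_template(messages, tokenize=True,
#                                        return_tensors="pt", return_dict=True)
# out = model.generate(**inputs, max_new_tokens=256)
```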

❗❗❗ Use at your own risk. The model may generate hallucinated, incorrect, invented, offensive, unethical, or dangerous responses. We are not responsible for any dangerous, offensive, or criminal use. The model is released for research purposes only. ❗❗❗

The 🌟ANITA project🌟 *(Advanced Natural-based interaction for the ITAlian language)* aims to provide Italian NLP researchers with an improved model for Italian-language 🇮🇹 use cases.

The NEXT family includes four models:

  • m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA - General Purpose
  • m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA - Uncensored
  • m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA - Vision-Language
  • m-polignano/ANITA-NEXT-20B-gpt-oss-ITA - Agentic Ready

Full Model: m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA


For Ollama inference, follow the Hugging Face documentation.
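As a minimal sketch: the Hugging Face Hub lets Ollama pull GGUF repositories directly via the `hf.co/<user>/<repo>` naming scheme, optionally pinned to a quantization tag. The commands below only construct and print the invocation; running them for real requires a local Ollama installation.

```shell
# Hedged sketch: pulling this GGUF repo with Ollama via the Hub's
# hf.co/<user>/<repo> scheme (see the Hugging Face Ollama docs).
REPO="m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA-GGUF"
MODEL="hf.co/${REPO}"
echo "ollama run ${MODEL}"
# Optionally pin a quantization tag, e.g. a 4-bit variant:
echo "ollama run ${MODEL}:Q4_K_M"
```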


Citation instructions

@misc{polignano2024advanced,
      title={Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA}, 
      author={Marco Polignano and Pierpaolo Basile and Giovanni Semeraro},
      year={2024},
      eprint={2405.07101},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@article{rastogi2025magistral,
  title={Magistral},
  author={Rastogi, Abhinav and Jiang, Albert Q and Lo, Andy and Berrada, Gabrielle and Lample, Guillaume and Rute, Jason and Barmentlo, Joep and Yadav, Karmesh and Khandelwal, Kartik and Chandu, Khyathi Raghavi and others},
  journal={arXiv preprint arXiv:2506.10910},
  year={2025}
}
Format: GGUF
Model size: 23.6B params
Architecture: llama
Available quantizations: 4-bit, 8-bit, 16-bit
