# mlx-community/Llama-4-Maverick-17B-128E-Instruct-6bit
This model was converted to MLX format from [meta-llama/Llama-4-Maverick-17B-128E-Instruct](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct) using mlx-vlm version 0.1.21.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct) for more details on the model.
## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/Llama-4-Maverick-17B-128E-Instruct-6bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
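The model can also be driven from Python. The sketch below follows the upstream mlx-vlm README examples (API of the 0.1.x series used for this conversion); the image path is a placeholder, and exact signatures may differ in other mlx-vlm versions.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the 6-bit quantized model and its processor from the Hub
model_path = "mlx-community/Llama-4-Maverick-17B-128E-Instruct-6bit"
model, processor = load(model_path)
config = load_config(model_path)

# Placeholder image path -- replace with a real file
image = ["path/to/image.jpg"]
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Generate a description of the image
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```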
## Model tree

Base model: [meta-llama/Llama-4-Maverick-17B-128E](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E)