DynamicVisualLearning-v2 - MLX Fine-tuned Vision Language Model

This model was fine-tuned using the VisualAI platform with MLX (Apple Silicon optimization).

🚀 Model Details

  • Base Model: mlx-community/SmolVLM-256M-Instruct-bf16
  • Training Platform: VisualAI (MLX-optimized)
  • GPU Type: MLX (Apple Silicon)
  • Training Job ID: 2
  • Created: 2025-06-03 03:29:58
  • Training Completed: ✅ Yes

📊 Training Data

This model was fine-tuned on a combined dataset of visual examples and conversations.

πŸ› οΈ Usage

Installation

pip install mlx-vlm
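
The loading snippets below read the fine-tuning artifacts (mlx_model_info.json, adapters/, training_images/) from the working directory. One way to fetch them, assuming the repository id truworthai/DynamicVisualLearning-v2-mlx and that huggingface_hub is installed (pip install huggingface_hub), is a snapshot download; this is a minimal sketch, not part of the official training pipeline.

from huggingface_hub import snapshot_download

# Download the model card, mlx_model_info.json, training_images/ and adapters/
# into the current directory so the relative paths used below resolve.
# Repository id assumed from this card: truworthai/DynamicVisualLearning-v2-mlx
snapshot_download(
    repo_id="truworthai/DynamicVisualLearning-v2-mlx",
    local_dir=".",
)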

Loading the Model

from mlx_vlm import load
import json
import os

# Load the base MLX model
model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")

# Load the fine-tuned artifacts
model_info_path = "mlx_model_info.json"
if os.path.exists(model_info_path):
    with open(model_info_path, 'r') as f:
        model_info = json.load(f)
    print(f"βœ… Loaded fine-tuned model with {model_info.get('training_examples_count', 0)} training examples")

# Check for adapter weights
adapters_path = "adapters/adapter_config.json"
if os.path.exists(adapters_path):
    with open(adapters_path, 'r') as f:
        adapter_config = json.load(f)
    print(f"🎯 Found MLX adapters with {adapter_config.get('training_examples', 0)} training examples")

Inference

from mlx_vlm import generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image

# Load your image (model and processor come from the loading example above)
image = Image.open("your_image.jpg")

# Ask a question
question = "What type of brake component is this?"

# Format the prompt
config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
formatted_prompt = apply_chat_template(processor, config, question, num_images=1)

# Generate response
response = generate(model, processor, formatted_prompt, [image], verbose=False, max_tokens=100)
print(f"Model response: {response}")

πŸ“ Model Artifacts

This repository contains the following artifacts (a short snippet for checking them locally follows the list):

  • mlx_model_info.json: Training metadata and learned mappings
  • training_images/: Reference images from training data
  • adapters/: MLX LoRA adapter weights and configuration (if available)
  • README.md: This documentation
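
The snippet below is a small, optional check (not part of the training pipeline) that reports which of the artifacts listed above are present in the working directory after downloading the repository as shown in the Installation section.

import os

# Report which of the documented artifacts exist locally.
for name in ("mlx_model_info.json", "training_images", "adapters", "README.md"):
    print(f"{name}: {'present' if os.path.exists(name) else 'missing'}")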

⚠️ Important Notes

  • This model uses MLX format optimized for Apple Silicon
  • The actual model weights remain in the base model (mlx-community/SmolVLM-256M-Instruct-bf16)
  • The fine-tuning artifacts enhance the model's domain-specific knowledge (see the sketch after this list)
  • Check the adapters/ folder for MLX-specific fine-tuned weights
  • For best results, use on Apple Silicon devices (M1/M2/M3)
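
As a concrete illustration of how the metadata in mlx_model_info.json can add domain context at inference time, the sketch below folds a list of domain keywords into the question before generation. The key names "domain_keywords" and "learned_mappings" are hypothetical, since the exact schema of mlx_model_info.json is not documented on this card; adjust them to match the actual file.

import json

# Hypothetical sketch: the key names below are assumptions about the
# mlx_model_info.json schema and may need to be adjusted.
with open("mlx_model_info.json") as f:
    model_info = json.load(f)

domain_keywords = model_info.get("domain_keywords", [])    # assumed key
learned_mappings = model_info.get("learned_mappings", {})  # assumed key

# Prepend a few domain terms to the question as lightweight context.
question = "What type of brake component is this?"
if domain_keywords:
    question = (
        "Relevant domain terms: " + ", ".join(map(str, domain_keywords[:10]))
        + "\n" + question
    )
print(question)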

🎯 Training Statistics

  • Training Examples: 3
  • Learned Mappings: 2
  • Domain Keywords: 79

📞 Support

For questions about this model or the VisualAI platform, please refer to the training logs or contact support.


This model was trained using VisualAI's MLX-optimized training pipeline.
