AI Racing Game Model
This is a neural network trained to play a racing game using behavioral cloning. The model learns from expert demonstrations to make driving decisions (left, stay, right) based on the current game state.
Model Details
- Model Type: Feed-forward Neural Network
- Framework: TensorFlow/Keras
- Training Method: Behavioral Cloning (Supervised Learning)
- Input: 9-dimensional state vector
- Output: 3-dimensional action probabilities
Training Data
- Total Samples: 75,000
- Data Source: Synthetic expert demonstrations
- Difficulty Levels: Progressive (0.5x to 1.5x)
- Training Method: Supervised learning on expert actions
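The expert policy behind these demonstrations is not shipped with the model. As a rough, hypothetical illustration of what a rule-based synthetic expert could look like (the function name, sensor layout, and the 0.5 clearance threshold are all assumptions, not the actual data pipeline):

import numpy as np

def expert_action(state):
    # Hypothetical expert: stay unless the current lane's near sensor
    # reports an obstacle, then dodge toward the clearer adjacent lane.
    # state uses the 9-feature layout described under "Input Format".
    near = state[0:6:2]                 # near sensors for lanes 0, 1, 2
    lane = int(round(state[6] * 2))     # de-normalize current lane index
    if near[lane] < 0.5:                # current lane looks clear
        return 1                        # 1 = Stay
    options = [l for l in (lane - 1, lane + 1) if 0 <= l <= 2]
    best = min(options, key=lambda l: near[l])
    return 0 if best < lane else 2      # 0 = Left, 2 = Right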
Model Architecture
Input Layer: 9 features
Hidden Layer 1: 64 neurons (ReLU + BatchNorm + Dropout 0.3)
Hidden Layer 2: 32 neurons (ReLU + BatchNorm + Dropout 0.2)
Hidden Layer 3: 16 neurons (ReLU + Dropout 0.1)
Output Layer: 3 neurons (Softmax)
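The summary above fixes the layer sizes but not the exact ordering inside each block; a minimal Keras sketch, assuming Dense → BatchNorm → Dropout within each block, might look like:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(9,)),               # 9 game-state features
    layers.Dense(64, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(32, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.2),
    layers.Dense(16, activation='relu'),
    layers.Dropout(0.1),
    layers.Dense(3, activation='softmax'),  # left / stay / right
])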
Performance
- Test Accuracy: 0.9879
- Test Loss: 0.0471
- Weighted F1-Score: 0.9861
Per-Class Metrics
- Left Action: Precision: 0.901, Recall: 0.471, F1: 0.618
- Stay Action: Precision: 0.989, Recall: 1.000, F1: 0.994
- Right Action: Precision: 0.973, Recall: 0.545, F1: 0.699

The gap between the near-perfect overall accuracy and the much lower Left/Right recall reflects class imbalance: the expert stays in its lane for the vast majority of frames, so the Stay class dominates both the test set and the weighted metrics, while roughly half of the true Left/Right actions are missed.
Usage
TensorFlow.js (Web)
// Load the model
const model = await tf.loadLayersModel('model.json');
// Prepare input (9-dimensional array)
const gameState = tf.tensor2d([[
  lane0_near, lane0_far,  // Lane 0 sensors
  lane1_near, lane1_far,  // Lane 1 sensors
  lane2_near, lane2_far,  // Lane 2 sensors
  current_lane_norm,      // Current lane (0-1)
  progress_norm,          // Game progress (0-1)
  speed_factor            // Speed factor
]], [1, 9]);
// Get prediction
const prediction = model.predict(gameState);
const actionProbs = await prediction.data();
// Choose action (0=Left, 1=Stay, 2=Right)
const action = actionProbs.indexOf(Math.max(...actionProbs));
Python (TensorFlow)
import tensorflow as tf
import numpy as np
# Load the model
model = tf.keras.models.load_model('racing_model.keras')
# Prepare input
game_state = np.array([[
    lane0_near, lane0_far,
    lane1_near, lane1_far,
    lane2_near, lane2_far,
    current_lane_norm,
    progress_norm,
    speed_factor
]])
# Get prediction
action_probs = model.predict(game_state)[0]
action = np.argmax(action_probs) # 0=Left, 1=Stay, 2=Right
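For per-frame inference in a game loop, calling the model directly can be faster than model.predict, which is tuned for large batches (the Keras docs recommend direct calls for small inputs that fit in one batch):

# Direct call: lower overhead for a single sample in a tight loop
action_probs = model(game_state, training=False).numpy()[0]
action = int(np.argmax(action_probs))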
Input Format
The model expects a 9-dimensional input vector:
- lane0_near (0-1): Near obstacle sensor for left lane
- lane0_far (0-1): Far obstacle sensor for left lane
- lane1_near (0-1): Near obstacle sensor for middle lane
- lane1_far (0-1): Far obstacle sensor for middle lane
- lane2_near (0-1): Near obstacle sensor for right lane
- lane2_far (0-1): Far obstacle sensor for right lane
- current_lane_norm (0-1): Current lane position normalized
- progress_norm (0-1): Game progress/score normalized
- speed_factor (0-1): Current game speed factor
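How these features come out of the game engine is implementation-specific; a hypothetical packing helper (the names and normalization constants are assumptions) that produces the expected layout:

import numpy as np

def build_state(sensors, current_lane, score, max_score, speed, max_speed):
    # sensors: {lane_index: (near, far)}, values already scaled to [0, 1]
    return np.array([[
        sensors[0][0], sensors[0][1],
        sensors[1][0], sensors[1][1],
        sensors[2][0], sensors[2][1],
        current_lane / 2.0,            # 3 lanes -> 0.0, 0.5, 1.0
        min(score / max_score, 1.0),
        min(speed / max_speed, 1.0),
    ]], dtype=np.float32)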
Output Format
The model outputs 3 probability values:
- Index 0: Probability of moving left
- Index 1: Probability of staying in current lane
- Index 2: Probability of moving right
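For example, turning the probabilities into a named action in Python:

import numpy as np

ACTIONS = ("left", "stay", "right")
probs = model.predict(game_state)[0]    # e.g. [0.05, 0.90, 0.05]
print(ACTIONS[int(np.argmax(probs))])   # -> "stay"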
Files Included
- model.json + *.bin: TensorFlow.js model files
- racing_model.keras: Native Keras model
- metadata.json: Model metadata and training info
- training_history.png: Training progress visualization
Training Details
- Epochs: 30
- Batch Size: 64
- Optimizer: Adam (lr=0.001)
- Loss Function: Categorical Crossentropy
- Early Stopping: Patience 8
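A training setup consistent with these hyperparameters might look like the following sketch (the validation split, the monitored quantity, and restore_best_weights are assumptions not stated above):

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',            # assumption: monitored metric not stated
    patience=8,
    restore_best_weights=True,
)

history = model.fit(
    X_train, y_train,              # y_train one-hot over the 3 actions
    validation_split=0.1,          # assumption: split not documented
    epochs=30,
    batch_size=64,
    callbacks=[early_stop],
)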
Citation
If you use this model, please cite:
@misc{ai_racing_model,
title={AI Racing Game Neural Network},
author={Your Name},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/Relacosm/theline-v1}
}