Stage 2 Simple Universal LSTM Model Architecture
================================================
Model Type: Sequential LSTM
Training Date: 20250705_170829
Training Stocks: 20 stocks
Features: 6 features
Target: Close
Architecture:
1. LSTM Layer 1: 50 units, return_sequences=True
2. Dropout: 0.2
3. LSTM Layer 2: 50 units, return_sequences=False
4. Dropout: 0.2
5. Dense Output: 1 unit
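The layer stack above can be sketched in Keras as follows. This is a reconstruction from the description, not the saved model itself; the input length of 60 is taken from the usage instructions later in this document.

```python
import tensorflow as tf

def build_model(seq_len=60, n_features=6):
    """Rebuild the described stack: LSTM(50) -> Dropout -> LSTM(50) -> Dropout -> Dense(1)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, n_features)),
        tf.keras.layers.LSTM(50, return_sequences=True),   # layer 1: emits full sequence
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(50, return_sequences=False),  # layer 2: emits last step only
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1),                          # single Close-price output
    ])

model = build_model()
```

Returning sequences from the first LSTM and only the final state from the second is the standard way to stack recurrent layers while ending with one prediction per input window.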
Training Parameters:
- Batch Size: 16
- Learning Rate: 0.001
- Epochs Trained: 88
- Early Stopping Patience: 12
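A hedged sketch of how these training parameters map onto a Keras training call. The Adam optimizer, MSE loss, and validation split are assumptions (the document does not state them); the tiny random dataset and two epochs exist only to make the snippet self-contained.

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the snippet runs on its own (not the real architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 6)),
    tf.keras.layers.LSTM(50),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # LR from above
              loss="mse")  # assumed loss

# Patience of 12 matches the "Early Stopping Patience" above.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=12, restore_best_weights=True)

X = np.random.rand(32, 60, 6).astype("float32")  # dummy data for illustration
y = np.random.rand(32, 1).astype("float32")
history = model.fit(X, y, batch_size=16, epochs=2,
                    validation_split=0.25, callbacks=[early_stop], verbose=0)
```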
Performance Metrics:
- RMSE: $26.85
- MAE: $12.60
- R²: 0.9903
- MAPE: 2.68%
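For reference, the four metrics above can be computed from a prediction array with numpy; this is a generic sketch (the arrays in the example are hypothetical, not the model's test set).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (rmse, mae, r2, mape_percent) for 1-D arrays of prices."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))                      # root mean squared error
    mae = np.mean(np.abs(err))                             # mean absolute error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                             # coefficient of determination
    mape = np.mean(np.abs(err / y_true)) * 100.0           # mean absolute % error
    return rmse, mae, r2, mape
```

Note that RMSE weights large misses more heavily than MAE, which is why the reported RMSE ($26.85) is roughly double the MAE ($12.60): a few stocks with high absolute prices dominate the squared error.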
Features Used:
- Open
- High
- Low
- Close
- Volume
- sentiment_10d_avg
Stocks Trained On:
- AAPL
- AMZN
- AVGO
- BRK.B
- COST
- GOOG
- JNJ
- JPM
- LLY
- MA
- META
- MSFT
- NFLX
- NVDA
- ORCL
- PG
- TSLA
- V
- WMT
- XOM
Usage Instructions:
1. Load model: model = tf.keras.models.load_model('../models/stage2_universal_lstm_20250705_170829.keras')
2. Load scalers: scalers = joblib.load('../models/stage2_scalers_20250705_170829.pkl')
3. Preprocess data using feature_scaler and target_scaler
4. Make predictions on sequences of shape (batch_size, 60, 6)
5. Inverse transform predictions using target_scaler
6. Inverse transform predictions using target_scaler
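Steps 3-5 can be sketched as follows. The windowing helper is generic; the dummy scaled array stands in for real feature data, and the commented lines assume the scalers pickle is a dict holding MinMaxScaler-style 'feature_scaler' and 'target_scaler' objects (an assumption; inspect the pickle to confirm its layout).

```python
import numpy as np

SEQ_LEN = 60      # window length the model expects
N_FEATURES = 6    # Open, High, Low, Close, Volume, sentiment_10d_avg

def make_sequences(scaled, seq_len=SEQ_LEN):
    """Slide a length-seq_len window over scaled rows -> (batch, seq_len, n_features)."""
    return np.stack([scaled[i:i + seq_len]
                     for i in range(len(scaled) - seq_len + 1)])

# Dummy stand-in for feature_scaler.transform(raw_features):
scaled = np.random.rand(100, N_FEATURES)
X = make_sequences(scaled)          # shape (41, 60, 6), ready for model.predict(X)

# With the real artifacts loaded (hypothetical dict keys):
# preds_scaled = model.predict(X)
# preds = scalers['target_scaler'].inverse_transform(preds_scaled)
```

The inverse transform in step 5 is required because the model is trained on scaled targets; skipping it yields values in scaler units, not dollars.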