# Batch Normalization for Neural Networks: Makemore (Part 3)
In this repository, I implemented Batch Normalization in a neural network to improve training stability and performance, following Andrej Karpathy's approach in the Makemore - Part 3 video.
## Overview
This implementation focuses on:
- Normalizing activations and gradients.
- Addressing initialization issues.
- Utilizing Kaiming initialization to prevent saturation of the activation functions.
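The two ideas above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's actual (PyTorch-based) code; the layer sizes, the tanh gain of 5/3, and all variable names are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

fan_in, fan_out = 30, 200
# Kaiming-style init: scale weights by gain / sqrt(fan_in);
# a gain of 5/3 is the value commonly used for tanh.
W = rng.standard_normal((fan_in, fan_out)) * (5 / 3) / fan_in**0.5

x = rng.standard_normal((32, fan_in))  # a batch of 32 examples
h_pre = x @ W                          # pre-activations

# Batch normalization: standardize each feature over the batch,
# then apply a learnable scale (gamma) and shift (beta).
eps = 1e-5
gamma = np.ones(fan_out)
beta = np.zeros(fan_out)
mean = h_pre.mean(axis=0, keepdims=True)
var = h_pre.var(axis=0, keepdims=True)
h_norm = gamma * (h_pre - mean) / np.sqrt(var + eps) + beta

h = np.tanh(h_norm)  # normalized pre-activations keep tanh out of its flat regions
```

After normalization, each feature has roughly zero mean and unit variance over the batch, so the tanh nonlinearity operates mostly in its active region regardless of how the weights were initialized.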
Additionally, visualization graphs were created at the end to analyze the effects of these techniques on the training process and model performance.
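One statistic those graphs typically track is the fraction of saturated tanh units (outputs with absolute value above ~0.97). Below is a hedged sketch of that diagnostic, comparing a naive standard-normal initialization against a Kaiming-scaled one; the sizes, threshold, and names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 100))  # batch of 32, fan_in of 100

results = {}
for label, scale in [("naive", 1.0), ("kaiming", 1.0 / 100**0.5)]:
    W = rng.standard_normal((100, 200)) * scale
    h = np.tanh(x @ W)
    # Fraction of units pushed into tanh's flat (saturated) region,
    # where gradients vanish.
    results[label] = (np.abs(h) > 0.97).mean()
    print(f"{label}: {results[label]:.1%} saturated")
```

With the naive scale, pre-activations have a standard deviation near 10 and most tanh outputs saturate; with Kaiming scaling, the saturated fraction drops to a few percent, which is the effect the visualizations in this repository make visible.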
## Documentation
For a better reading experience and detailed notes, visit my Road to GPT Documentation Site.
## Acknowledgments
Notes and implementations inspired by the Makemore - Part 3 video by Andrej Karpathy.
For more of my projects, visit my Portfolio Site.