Multi-Layer Perceptron Language Model: Makemore (Part 2)

This repository implements a Multi-Layer Perceptron (MLP) language model for character-level prediction, inspired by the Bengio et al. (2003) paper and following Andrej Karpathy's approach in the Makemore - Part 2 video.
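The core of the Bengio-style model is simple: each character in a fixed-length context is mapped to a learned embedding, the concatenated embeddings pass through a tanh hidden layer, and a linear output layer produces logits over the character vocabulary. The PyTorch sketch below illustrates this structure; the hyperparameter values (vocabulary size 27, context length 3, 10-dimensional embeddings, 200 hidden units) are illustrative assumptions and not necessarily the values used in this repository.

```python
import torch

# Illustrative hyperparameters (assumed, not taken from the repository)
vocab_size = 27   # 26 letters + '.' start/end token
block_size = 3    # context length: how many previous characters predict the next
emb_dim    = 10   # size of each character embedding
n_hidden   = 200  # neurons in the hidden layer

g  = torch.Generator().manual_seed(2147483647)
C  = torch.randn((vocab_size, emb_dim), generator=g)             # embedding lookup table
W1 = torch.randn((block_size * emb_dim, n_hidden), generator=g)  # hidden layer weights
b1 = torch.randn(n_hidden, generator=g)
W2 = torch.randn((n_hidden, vocab_size), generator=g)            # output layer weights
b2 = torch.randn(vocab_size, generator=g)
parameters = [C, W1, b1, W2, b2]

def forward(X):
    """X: (batch, block_size) tensor of character indices -> (batch, vocab_size) logits."""
    emb = C[X]                                            # (batch, block_size, emb_dim)
    h = torch.tanh(emb.view(emb.shape[0], -1) @ W1 + b1)  # (batch, n_hidden)
    return h @ W2 + b2                                    # logits over the next character
```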

Overview

The implementation walks through building and training the MLP for next-character prediction, deepening the understanding of neural network architectures for language modeling.
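Training follows the usual recipe from the video: draw mini-batches of context windows, backpropagate the cross-entropy loss between the predicted logits and the true next characters, and update the parameters with plain gradient descent. The sketch below is a minimal illustration that reuses the `forward` function and `parameters` list defined above and assumes `Xtr`/`Ytr` tensors have already been built from the training data; the batch size, step count, and learning-rate schedule are placeholder values.

```python
import torch
import torch.nn.functional as F

for p in parameters:
    p.requires_grad = True

# Xtr: (N, block_size) context windows, Ytr: (N,) next-character targets
# (assumed to be prepared from the training split of the dataset)
for step in range(20000):
    ix = torch.randint(0, Xtr.shape[0], (32,))   # random mini-batch of 32 examples
    logits = forward(Xtr[ix])
    loss = F.cross_entropy(logits, Ytr[ix])      # softmax + negative log-likelihood

    for p in parameters:
        p.grad = None                            # zero gradients
    loss.backward()

    lr = 0.1 if step < 10000 else 0.01           # simple step decay of the learning rate
    for p in parameters:
        p.data += -lr * p.grad
```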

Documentation

For a better reading experience and detailed notes, visit my Road to GPT Documentation Site.

Acknowledgments

Notes and implementations inspired by the Makemore - Part 2 video by Andrej Karpathy.

For more of my projects, visit my Portfolio Site.
