# Developing an MLP-Based AI/ML Model for Sudoku Puzzle Solving

## 1. Introduction to AI/ML Sudoku Solvers

Sudoku, a widely recognized logic-based combinatorial number-placement puzzle, presents a compelling challenge for Artificial Intelligence and Machine Learning models. The objective of Sudoku is to populate a 9x9 grid, which is further subdivided into nine 3x3 subgrids, with digits ranging from 1 to 9. The fundamental constraint is that each digit must appear exactly once within each row, each column, and each of the nine 3x3 subgrids [1]. This seemingly straightforward set of rules belies a vast solution space, making Sudoku an effective domain for evaluating diverse AI methodologies. Within this context, neural networks, particularly Multilayer Perceptrons (MLPs), have been explored for their capacity to recognize complex patterns and contribute to solving these puzzles.

A significant observation in the application of neural networks to Sudoku concerns the relative performance of purely neural "direct solution" models versus hybrid approaches. This distinction arises because Sudoku's inherent nature, characterized by inflexible rules and a reliance on logical deduction, makes it an unusually effective domain for exploring complex reasoning within AI [1]. The task is not merely one of pattern matching, but of adhering to strict combinatorial constraints. Consequently, a purely data-driven model, such as a basic MLP, may struggle to implicitly learn and consistently enforce all the logical deductions required for a valid Sudoku solution. While direct neural network solutions can achieve high one-shot accuracy, their effectiveness in guaranteeing logically sound outcomes for complex puzzles is often enhanced when they are integrated with explicit rule enforcement mechanisms or traditional search algorithms [6]. This combined approach leverages the strengths of both paradigms, allowing the neural network to identify patterns and probabilities while a classical solver ensures logical consistency.

## 2. Existing MLP-Based Sudoku Solver Models

Various neural network architectures have been employed to tackle Sudoku, including Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs) [7]. Multilayer Perceptrons (MLPs) also play a role in research and implementations, either as standalone solvers for specific sub-problems or as integral components within larger, more complex systems.

### Direct MLP Implementations

One notable example of a direct MLP application is found in the Ritvik19/SudokuNet project. This initiative explicitly features a Feed Forward Neural Network (FFN), a form of MLP, as one of its predefined models for Sudoku puzzle resolution [9]. The project's stated purpose is to serve as a research and experimental platform, investigating the capabilities of neural networks in the Sudoku domain [10].

Another instance is a "basic MLP with one hidden layer" employed to calculate cell probabilities within a recurrent neural network framework for Sudoku solving [11]. This demonstrates that MLPs can function as core probabilistic components embedded within more sophisticated architectures. Furthermore, an analog neural network, effectively a fully connected (MLP-like) structure, was specifically engineered to learn the rules of 2x2 Sudoku-like puzzles [12]. Although applied at a smaller scale, this highlights the fundamental applicability of MLPs to problems involving constraint learning.
### Hybrid Approaches and the Role of MLPs

Many successful AI models for Sudoku adopt a hybrid strategy, integrating neural networks with traditional algorithmic solvers such as backtracking or constraint satisfaction. For example, several projects combine deep learning models, often CNNs for optical character recognition (OCR) of digits, with a backtracking algorithm to complete the puzzle [8]. In such a system, an MLP could be utilized in the digit recognition phase, classifying the content of each cell.

Research indicates that "direct solution models," which can encompass MLPs or other feedforward networks, are trainable on correct solutions [6]. However, while these direct models can improve one-shot accuracy, backtracking models, which employ sequential search, may exhibit superior performance on Sudoku under equivalent test-time computational budgets. This is particularly true when backtracking models are allowed to discover novel search strategies through reinforcement learning [6].

From certain perspectives, neural networks are considered "not very accurate" or "not ideal candidates" for Sudoku due to its precise, calculative, and deduction-driven nature, which often favors simpler recursive or search methods, or approaches based on constraint satisfaction problems [1]. This critique typically applies to purely end-to-end neural network solvers that attempt to implicitly learn all combinatorial rules. The inherent logical constraints of Sudoku often necessitate coupling neural networks with explicit rule enforcement mechanisms or search algorithms for robust and verifiable solutions. This design addresses the neural network's limitations in symbolic logical deduction by delegating that aspect to a classical solver, allowing the network to concentrate on pattern recognition or on guiding the search process.

Beyond serving as standalone solvers, MLPs frequently function as components within larger Sudoku-solving frameworks. They can act as probabilistic classifiers for individual cell values or as guides for search algorithms, rather than being solely responsible for implicitly learning all combinatorial rules. This division of labor, where a neural network handles pattern or probability prediction and a classical algorithm manages logical consistency and constraint satisfaction, represents a sophisticated design pattern for problems that involve both perceptual and logical elements.

## 3. Inputs and Components of a Sudoku-Solver_AI_Model.py

Developing a Sudoku-Solver_AI_Model.py based on an MLP architecture requires careful consideration of data representation, network design, and potential integration with other algorithms.

### Input Representation

The most prevalent input format for a 9x9 Sudoku grid involves representing its 81 cells, either as a flattened 1D array of 81 integers [1] or as a 9x9 2D matrix [1]. Empty cells are consistently denoted by the digit '0' [1].

For optimal neural network processing, raw digit values (0-9) typically undergo preprocessing. One common encoding method is one-hot encoding, where each cell's numerical value is transformed into a vector. For instance, the digit '2' would become a 10-element vector with a 1 at the index for '2' (e.g., [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]). If all 81 cells are processed concurrently with one-hot encoding, the input layer could require 810 features (81 cells x 10 possible values, including 0 for empty) [11]. Empty cells might be represented by an all-zero vector or a dedicated '0' category. Additionally, numerical normalization is frequently applied: scaling values, for example by dividing them by 9 and then subtracting 0.5, brings them into a range like -0.5 to 0.5 [7]. This practice is fundamental for enhancing neural network performance, as models generally generalize better when trained on zero-centered, normalized data [7].
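To make these encodings concrete, the following is a minimal sketch, assuming the common 81-character string format with '0' for empty cells (the function names are illustrative, not from any cited project):

```python
import numpy as np

def encode_normalized(puzzle: str) -> np.ndarray:
    """Encode an 81-character puzzle string as 81 zero-centered floats.

    Each digit (0-9) is divided by 9 and shifted by -0.5, giving values
    in [-0.5, 0.5], as suggested for zero-centered inputs.
    """
    digits = np.array([int(c) for c in puzzle], dtype=np.float32)
    return digits / 9.0 - 0.5  # shape (81,)

def encode_one_hot(puzzle: str) -> np.ndarray:
    """Encode an 81-character puzzle string as an 810-element one-hot vector.

    Ten categories per cell (0 for empty, 1-9 for digits), flattened to
    81 * 10 = 810 input features.
    """
    digits = np.array([int(c) for c in puzzle])
    one_hot = np.eye(10, dtype=np.float32)[digits]  # shape (81, 10)
    return one_hot.reshape(-1)  # shape (810,)

if __name__ == "__main__":
    puzzle = "2" + "0" * 80  # digit 2 in the first cell, rest empty
    print(encode_normalized(puzzle)[:3])  # approx [-0.278, -0.5, -0.5]
    print(encode_one_hot(puzzle)[:10])    # one-hot for '2': 1 at index 2
```

Either representation can feed an MLP; the one-hot form trades a larger input size for removing the artificial ordering that raw digit values imply.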
More advanced models may incorporate explicit constraint encoding into their input or network structure. A "constraint mask tensor" can be constructed to provide the network with direct information about valid numbers for each cell, based on the rules governing rows, columns, and 3x3 boxes [11]. This tensor, often with dimensions (81, 3, 81), enumerates the 81 cells, the three types of constraints (row, column, box), and the specific cells that impose constraints on the cell in question [11].

### MLP Architecture

A Multilayer Perceptron is fundamentally composed of an input layer, one or more hidden layers, and an output layer [17]. The input layer comprises neurons corresponding to the number of features in the input data [17]. For an 81-cell Sudoku, if each cell's normalized value is input directly, the input layer might have 81 neurons; if the cells are one-hot encoded (0-9), this expands to 810 neurons.

Hidden layers consist of interconnected neurons that apply an activation function to the weighted sum of their inputs and biases [17]. The number of hidden layers and neurons per layer is a critical hyperparameter [18]. General recommendations suggest starting with a simpler architecture, such as a single hidden layer, and progressively increasing complexity as warranted [17]. The neuron count in hidden layers is often advised to be between the sizes of the input and output layers, or approximately two-thirds the size of the input layer plus the size of the output layer [19].

The output layer generates the network's final predictions. For Sudoku, this typically means predicting the digit for each of the 81 cells, in one of two primary ways:

- Direct Prediction (81x1): When using a loss function such as sparse_categorical_crossentropy, the output shape can be (81, 1), where each of the 81 outputs directly predicts a digit from 0 to 9 for a corresponding cell [1]. This configuration implies an internal mapping from the network's raw output to the final digit.
- Probabilistic Output (81x9 or 81x10): More commonly, the output layer may contain 81 * 9 (or 81 * 10 if '0' is treated as a distinct category) neurons. A softmax activation is then applied across the 9 (or 10) possible digits for each of the 81 cells [11]. This yields a probability distribution over the possible digits for each cell, from which the most probable digit can be selected via argmax.
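As one concrete reading of the probabilistic variant, here is a minimal Keras sketch. The specific numbers are assumptions for illustration (810 one-hot inputs, a single 512-unit hidden layer, 81x9 softmax outputs) and do not come from any cited model:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mlp() -> tf.keras.Model:
    """A minimal MLP: 810 one-hot inputs -> one hidden layer -> 81x9 softmax."""
    model = models.Sequential([
        layers.Input(shape=(810,)),           # 81 cells x 10 one-hot categories
        layers.Dense(512, activation="relu"), # start simple: one hidden layer
        layers.Dense(81 * 9),                 # 9 digit logits per cell
        layers.Reshape((81, 9)),
        layers.Softmax(axis=-1),              # probability distribution per cell
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With sparse_categorical_crossentropy, the targets are simply the 81 solution digits per puzzle, shifted to class labels 0-8. The hidden-layer ReLU and output softmax anticipate the activation choices discussed next.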
Activation functions are essential non-linear components that enable MLPs to learn complex patterns [17]. Common choices for hidden layers include the Rectified Linear Unit (ReLU) [18], Sigmoid, and Tanh functions [17]. For the output layer in multi-class classification tasks (predicting one of 9 digits per cell), Softmax is the typical choice [11].

### Integration with Classical Algorithms (Sudoku-Solver_AI_Model.py)

A robust Sudoku-Solver_AI_Model.py is likely to be a modular system, separating concerns such as image processing, digit recognition, and the core Sudoku-solving logic. This modularity improves debuggability and maintainability and allows each component to be optimized separately. The consistently observed pipeline (image input, preprocessing, digit recognition, often via CNN, and finally the Sudoku solver combining a neural network with backtracking or CSP techniques) [3] underscores that a single, monolithic MLP attempting to manage everything from pixel data to the final solution is less practical or effective. By decomposing the problem, each module can be optimized for its specific task: a CNN excels at digit recognition from images, an MLP can effectively map a symbolic grid state to potential next moves, and a backtracking algorithm ensures logical consistency. This modular design aligns with best practices in complex software engineering and is highly applicable to AI systems addressing multi-faceted problems.

- Backtracking: Many AI Sudoku solvers employ a neural network for tasks like digit recognition (if the input is an image) or initial cell value prediction, then use a backtracking algorithm to complete and validate the puzzle [8]. The backtracking algorithm recursively explores possible paths, filling empty cells and reverting (backtracking) when a dead end is encountered [8]. This recursive process ensures the solution adheres to all Sudoku rules (a minimal sketch appears after this list).
- Constraint Propagation/Satisfaction (CSP): Sudoku is fundamentally a Constraint Satisfaction Problem [15]. MLPs can be integrated with CSP solvers: the neural network predicts initial values or probabilities, and a CSP engine then rigorously enforces the strict rules of Sudoku (e.g., uniqueness within rows, columns, and 3x3 boxes) [25]. This hybrid approach can correct classifier mistakes and guarantee a feasible solution [25].
- Iterative Refinement: Some models employ an iterative approach, in which the partial solution generated by one neural network inference step is fed back as input for the next iteration. This process continues, gradually filling the grid, until all empty cells (represented by zeros) are populated [14]. The algorithm proceeds for N iterations, where N is the number of unfilled positions in the puzzle.
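Below is a minimal, self-contained backtracking sketch over a 9x9 grid of integers (0 for empty). It illustrates the recursive fill-and-revert behavior described above; it is not the implementation of any cited project:

```python
def is_valid(grid: list[list[int]], row: int, col: int, digit: int) -> bool:
    """Check whether placing `digit` at (row, col) violates any Sudoku rule."""
    if any(grid[row][c] == digit for c in range(9)):  # row constraint
        return False
    if any(grid[r][col] == digit for r in range(9)):  # column constraint
        return False
    br, bc = 3 * (row // 3), 3 * (col // 3)           # top-left of the 3x3 box
    return all(grid[br + r][bc + c] != digit
               for r in range(3) for c in range(3))

def solve(grid: list[list[int]]) -> bool:
    """Fill empty cells (0) in place; return True if a solution is found."""
    for row in range(9):
        for col in range(9):
            if grid[row][col] == 0:
                for digit in range(1, 10):
                    if is_valid(grid, row, col, digit):
                        grid[row][col] = digit
                        if solve(grid):
                            return True
                        grid[row][col] = 0  # dead end: backtrack
                return False  # no digit fits this cell
    return True  # no empty cells remain
```

In a hybrid system, the per-cell probabilities from the neural network can reorder the range(1, 10) candidate loop so the search tries the most likely digits first.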
### Preprocessing Steps

When the input to the system is an image of a Sudoku puzzle, such as a webcam frame, extensive image processing is required before the numerical grid can be presented to the MLP:

- Board Extraction: Techniques such as greyscaling, thresholding, Gaussian blur (to mitigate background noise), Canny edge detection, contour detection, and perspective transform are applied to locate, crop, and deskew the Sudoku grid in the image [3].
- Digit Recognition (OCR): After the individual cells are isolated, optical character recognition is performed to identify the digits within them. This often involves training a separate CNN model on large datasets of handwritten or printed digits, such as MNIST, or on custom datasets derived from Sudoku images [3]. The precision of this step is paramount, as even a single misclassified digit can render the entire board invalid [3].

A key implication is that while MLPs excel at pattern recognition, directly encoding Sudoku's rigid rules (uniqueness in rows, columns, and 3x3 blocks) into the network architecture or input representation (e.g., through constraint masks) is more effective than relying on the network to implicitly learn these hard constraints from data. Sudoku rules are explicit and inflexible [1]; simply feeding a grid to an MLP can produce statistically plausible but logically invalid solutions. The use of constraint mask tensors [11] or the combination with CSP solvers [25] explicitly provides the network with information about cell interdependencies or offloads rule enforcement to a dedicated logical engine. This highlights a fundamental challenge for neural networks in symbolic reasoning: for problems with strict logical rules, explicit constraint integration often outperforms implicit learning.

## 4. Training Data Requirements for Sudoku-Solver_AI_Model.py

The effectiveness of a data-driven MLP model for Sudoku depends critically on both the quantity and the quality of its training data.

### Minimum Solved Sudoku Games

Research consistently demonstrates that high-performing neural network Sudoku solvers are trained on exceptionally large datasets. The Ritvik19/SudokuNet project utilized a dataset of 17 million Sudoku puzzles [9]; other implementations report training on 10 million [14] or 9 million puzzles [1]. Even models focusing on specific aspects, such as GraphSAGE or CNNs, employed datasets of 1 million Sudoku games, typically partitioned into 800,000 for training and 200,000 for testing or validation [7].

These datasets typically consist of pairs of unsolved puzzles (quizzes) and their corresponding solutions, often represented as strings of 81 characters or as 9x9 matrices [1], with the character '0' signifying an unfilled square [1]. The sheer volume of data underscores the need for diverse puzzle configurations so the network can generalize across difficulties and initial states.

The consistently massive scale of training data (millions of puzzles) indicates that for MLPs to learn the complex, non-linear relationships and implicit "rules" of Sudoku, they require extensive exposure to the problem space. This is a direct consequence of Sudoku's combinatorial nature and the neural network's reliance on statistical patterns. To generalize across the vast number of possible puzzles and solutions, a model must encounter a substantial portion of this diversity; without sufficient data, it would likely overfit to the training examples or fail to learn the underlying logical constraints, performing poorly on unseen puzzles. Data acquisition or generation is therefore a primary resource consideration for such projects.

Beyond sheer quantity, effectiveness also hinges on how the data is prepared and represented. High-quality, well-preprocessed data significantly improves learning and model performance. Sources emphasize the importance of numerical normalization, ensuring data is "zero-centered" for "enhanced performance" [7]. The use of one-hot encoding [11] and the explicit encoding of structural information, such as the constraint mask tensor [11], further illustrate that how puzzles are presented to the network, through careful feature engineering and preprocessing, is as vital as the raw volume of examples. The model's capacity to learn is directly tied to how effectively the input features capture the essence of the problem.
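A minimal loading sketch for such quiz/solution pairs, assuming a Kaggle-style CSV with 'quizzes' and 'solutions' columns of 81-character strings (the column names and file path are assumptions):

```python
import numpy as np
import pandas as pd

def load_sudoku_csv(path: str):
    """Load quiz/solution string pairs and convert them to training arrays.

    Assumes a CSV with 'quizzes' and 'solutions' columns, each holding
    81-character digit strings ('0' marks an empty cell in the quiz).
    """
    df = pd.read_csv(path)
    quizzes = np.array([[int(c) for c in s] for s in df["quizzes"]])
    solutions = np.array([[int(c) for c in s] for s in df["solutions"]])
    # One-hot encode the quizzes: (N, 81, 10) flattened to (N, 810).
    x = np.eye(10, dtype=np.float32)[quizzes].reshape(len(df), 810)
    # Solution digits 1-9 become class labels 0-8 per cell: shape (N, 81).
    y = (solutions - 1).astype(np.int64)
    return x, y

# Usage (path is illustrative):
# x_train, y_train = load_sudoku_csv("sudoku.csv")
```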
Table 1: Training Dataset Sizes for Sudoku AI Models

| Model/Study | Training Dataset Size (Puzzles) | Source | Notes |
|---|---|---|---|
| Ritvik19/SudokuNet | 17,000,000 | [9] | Includes puzzle configurations, solutions, difficulty levels, and sources. |
| Deep Sudoku Solver (Kaggle) | 9,000,000 | [1] | Can be combined with a 1M dataset for more data. |
| AI-Sudoku-Solver (GitHub) | 10,000,000 | [14] | Used for the Sudoku solver model (9x9 arrays of integers). |
| Extending GraphSAGE / Stanford Project | 800,000 (from 1M total) | [7] | 1M total, 80% train / 20% test. Test set of 5,000 puzzles. |
| Semantic Segmentation (Digit Recognition) | ~100 images | [20] | For digit recognition only, not the full solver. |

## 5. Training Epochs and Convergence for Sudoku-Solver_AI_Model.py

Determining the optimal number of epochs for training an MLP-based Sudoku solver is a critical aspect of model development, requiring a balance between underfitting and overfitting.

### Typical Epoch Ranges

Specific epoch counts for MLP Sudoku solvers are not consistently reported, as this parameter depends heavily on the particular architecture, the dataset size, and the desired level of performance. One documented instance, a semantic segmentation network used as a digit recognition sub-component, reported "roughly 20 minutes to run through 40 epochs" [20]; this figure pertains to a specialized part of the system, not the full solver. More broadly, for complex real-world problems, neural networks may require "hundreds of epochs" to converge [29]. The Ritvik19/SudokuNet project exposes the number of epochs as a command-line argument (--epochs EPOCHS) during training, indicating a configurable value rather than a fixed default in the provided documentation [9].

### Factors Influencing Epochs

Several factors significantly influence the number of epochs required for effective training:

- Dataset Size: Larger datasets generally demand more epochs to adequately capture the diverse patterns in the data [30]. Conversely, smaller datasets may converge faster, but they carry a higher risk of overfitting if trained for too many epochs [30].
- Model Complexity: Models with more layers or a higher neuron count have a greater capacity to learn intricate patterns, and may consequently require additional epochs to converge to their optimal performance [30].
- Learning Rate and Optimizer: The choice of optimization algorithm, such as Adam (used by Ritvik19/SudokuNet with a learning rate of 1e-3 [9]), and the specific learning rate profoundly impact the speed and stability of convergence. Some neural network Sudoku solvers achieved optimal results with a learning rate close to 0.001 [7].
- Batch Size: The batch size, for example 64K for Ritvik19/SudokuNet [9], affects the number of iterations within each epoch and the stability of gradient updates, thereby indirectly influencing the total epochs needed.
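The interaction between dataset size and batch size is easy to quantify; using the figures reported for Ritvik19/SudokuNet:

```python
import math

dataset_size = 17_000_000  # Ritvik19/SudokuNet training set
batch_size = 64 * 1024     # the project's reported 64K batch size

steps_per_epoch = math.ceil(dataset_size / batch_size)
print(steps_per_epoch)  # 260 gradient updates per epoch
```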
### Convergence Criteria and Early Stopping

During training, it is standard practice to monitor learning curves, which plot metrics such as loss (or error) and accuracy against the number of epochs [29]. The objective is to observe a consistent decrease in loss and increase in accuracy until the model converges [29].

Early stopping is a crucial technique for preventing overfitting and optimizing training time [17]. It halts training when the model's performance on a separate validation set (an indicator of generalization error) begins to deteriorate [29]. A "patience" parameter is typically configured, defining the number of epochs to wait for an improvement in validation performance before terminating training [30]. This strategy allows setting an initially high maximum number of epochs, ensuring that the model learns sufficiently without expending excessive computational resources or memorizing noise from the training set [29].

A dedicated validation set is indispensable for monitoring generalization performance and implementing early stopping [17]. Datasets are commonly partitioned into training, validation, and test sets [17]; for instance, a 95% training / 5% validation split was used for a 9 million puzzle Sudoku dataset [1].

An important implication is that instead of seeking a fixed "minimum" number of epochs, the more robust approach for training MLP Sudoku solvers is to monitor validation performance dynamically and apply early stopping. This adaptive strategy is crucial for achieving good generalization and preventing overfitting, especially given the large datasets and complex models involved. In practice, a Sudoku-Solver_AI_Model.py should incorporate early stopping callbacks to manage training efficiently.

Furthermore, the number of epochs is not an isolated hyperparameter: it is deeply interconnected with dataset size, model complexity, learning rate, and batch size. Optimizing one often necessitates tuning others, underscoring the iterative and experimental nature of deep learning model development. For example, a change in dataset size might require a different learning rate or model architecture, which in turn affects the number of epochs needed for convergence. Development therefore involves a multi-dimensional hyperparameter search rather than optimizing each parameter in isolation.
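In Keras, this adaptive strategy reduces to an EarlyStopping callback passed to model.fit. The sketch below uses an illustrative patience of 5 and the 95%/5% split mentioned above; the epoch ceiling and batch size are placeholders, not values from any cited project:

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch generalization error
    patience=5,                 # epochs to wait for an improvement
    restore_best_weights=True,  # roll back to the best validation epoch
)

# model, x_train, y_train as in the earlier sketches:
# history = model.fit(
#     x_train, y_train,
#     validation_split=0.05,    # e.g., a 95%/5% train/validation split
#     epochs=200,               # high ceiling; early stopping decides the end
#     batch_size=4096,
#     callbacks=[early_stop],
# )
```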
## 6. Conclusion and Recommendations

Developing an MLP-based AI/ML model for solving Sudoku puzzles is a multifaceted undertaking that benefits significantly from a hybrid methodological approach, access to substantial data, and adaptive training strategies.

### Key Findings

- MLP Applicability: While some perspectives suggest that MLPs are not ideally suited to the inherently logical nature of Sudoku, they are demonstrably employed in various capacities, particularly within hybrid systems or as components responsible for probabilistic predictions. Examples include the Ritvik19/SudokuNet project and work that uses MLPs for scoring cell probabilities.
- Hybrid System Efficacy: The most effective Sudoku solvers frequently integrate neural networks (for pattern recognition and initial value predictions) with classical algorithms such as backtracking or constraint satisfaction problem (CSP) solvers. This synergistic approach addresses the inherent limitations of purely data-driven models in handling Sudoku's rigid logical constraints, leading to more robust and verifiable solutions.
- Data Scale Requirement: Training high-performing MLP Sudoku solvers demands exceptionally large datasets, typically ranging from 1 million to 17 million solved puzzles. This volume is critical for enabling the model to learn the vast combinatorial patterns and generalize to unseen puzzles.
- Adaptive Training Necessity: There is no single, fixed "minimum" number of epochs. Training convergence is best managed through dynamic monitoring of validation performance and strategic early stopping, which prevents overfitting and uses computational resources efficiently.

### Practical Recommendations for Sudoku-Solver_AI_Model.py

Based on the analysis, the following recommendations apply to developing and training a Sudoku-Solver_AI_Model.py:

- Adopt a Modular Hybrid Architecture: For robust and accurate performance, consider a modular design. An MLP (or a CNN if the input is image-based) should handle tasks such as digit recognition or initial cell value prediction, and a classical algorithm, such as backtracking or a CSP solver, should then rigorously enforce Sudoku rules and derive the final, valid solution.
- Standardize Input and Output Representations: Represent Sudoku grids as flattened 81-element vectors or 9x9 matrices. Use one-hot encoding for cell values (e.g., 10 categories for 0-9) and normalize numerical inputs (e.g., scaling values to a range like -0.5 to 0.5). These preprocessing steps are crucial for network performance and learning efficiency.
- Secure Large-Scale Datasets: Plan for the acquisition or generation of millions of solved Sudoku puzzles for training. The model's effectiveness depends heavily on the diversity and volume of the training data, and proper preprocessing, including normalization and appropriate encoding, is as critical as sheer quantity.
- Implement Early Stopping: Avoid relying on a predetermined, fixed number of epochs. Instead, configure training with a sufficiently high maximum epoch count and an early stopping mechanism that monitors a chosen metric (e.g., validation loss or accuracy) and halts training when validation performance ceases to improve. This ensures good generalization and efficient resource utilization.
- Engage in Systematic Hyperparameter Experimentation: Experiment with MLP architectures, including the number of hidden layers, the neuron count per layer, and activation functions (e.g., ReLU for hidden layers, Softmax for the output layer). Additionally, tune learning rates (0.001 is a common starting point) and batch sizes. Hyperparameter optimization is an iterative process fundamental to maximizing model performance.
- Explore Explicit Constraint Integration: For more advanced MLP designs, investigate methods to explicitly encode Sudoku constraints into the network's input representation (e.g., constraint masks) or integrate logical layers that enforce these rules during inference. This helps the network adhere to the puzzle's strict logical requirements; a sketch of per-cell candidate computation follows this list.
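As a starting point for such explicit constraint integration, the sketch below derives a binary candidate mask per cell from the row, column, and box rules. It is a simplified stand-in for the (81, 3, 81) constraint mask tensor described in Section 3, not a reproduction of it:

```python
import numpy as np

def candidate_mask(grid: np.ndarray) -> np.ndarray:
    """Return an (81, 9) binary mask of rule-consistent digits per cell.

    `grid` is a 9x9 array of ints (0 = empty). mask[i, d-1] == 1 means
    digit d does not currently conflict with cell i's row, column, or box.
    Filled cells keep only their own digit as a candidate.
    """
    mask = np.zeros((81, 9), dtype=np.float32)
    for r in range(9):
        for c in range(9):
            i = 9 * r + c
            if grid[r, c] != 0:
                mask[i, grid[r, c] - 1] = 1.0
                continue
            br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left of the 3x3 box
            box = grid[br:br + 3, bc:bc + 3]
            for d in range(1, 10):
                if d not in grid[r] and d not in grid[:, c] and d not in box:
                    mask[i, d - 1] = 1.0
    return mask
```

Such a mask can be concatenated with the one-hot input features, or applied at inference time to zero out softmax probabilities for rule-violating digits before taking the argmax.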
## Works cited

1. Deep Sudoku Solver (Multiple Approaches) - Kaggle, accessed June 29, 2025, https://www.kaggle.com/code/yashchoudhary/deep-sudoku-solver-multiple-approaches
2. An Artificial Intelligence-based Solution to Sudoku - Supply Chain Link Blog, accessed June 29, 2025, https://blog.arkieva.com/an-artificial-intelligence-based-solution-to-sudoku/
3. AI Academy Capstone Projects: computer vision based Sudoku solver - Kainos, accessed June 29, 2025, https://www.kainos.com/insights/blogs/ai-academy-capstone-projects--improving-document-data-extraction-through-contextualisation-computer-vision-based-sudoku-solver
4. Sudoku implementation : r/reinforcementlearning - Reddit, accessed June 29, 2025, https://www.reddit.com/r/reinforcementlearning/comments/1c8muem/sudoku_implementation/
5. Sudoku-Bench: Evaluating creative reasoning with Sudoku variants - arXiv, accessed June 29, 2025, https://arxiv.org/html/2505.16135v1
6. To Backtrack or Not to Backtrack: When Sequential Search Limits ... - arXiv, accessed June 29, 2025, https://arxiv.org/pdf/2504.07052
7. Extending GraphSAGE to Solve Sudoku - DEV Community, accessed June 29, 2025, https://dev.to/sammoorsmith/extending-graphsage-to-solve-sudoku-2d8j
8. ps-19/Sudoku-Deep_Learning_Model: A simple machine learning based project. It aims to solve sudoku through webcam. - GitHub, accessed June 29, 2025, https://github.com/ps-19/Sudoku-Deep_Learning_Model
9. Ritvik19/SudokuNet: Ai Sudoku Solver - GitHub, accessed June 29, 2025, https://github.com/Ritvik19/SudokuNet
10. Ritvik19/SudokuNet - Hugging Face, accessed June 29, 2025, https://huggingface.co/Ritvik19/SudokuNet
11. Sudoku RNN in PyTorch. We construct a simple recurrent neural… | by Josef Lindman Hörnlund - Medium, accessed June 29, 2025, https://medium.com/@josef_44177/sudoku-rnn-in-pytorch-d1fddef850a8
12. An analog neural network that learns Sudoku-like puzzle rules - ResearchGate, accessed June 29, 2025, https://www.researchgate.net/publication/312559707_An_analog_neural_network_that_learns_Sudoku-like_puzzle_rules
13. zachrussell12/Sudoku-Solver: Using CNN's and OpenCV to read in Sudoku puzzles from images or the camera and utilizing backtracking to solve them. WIP - GitHub, accessed June 29, 2025, https://github.com/zachrussell12/Sudoku-Solver
14. anilsathyan7/AI-Sudoku-Solver: Solving Sudoku Puzzles With Computer Vision And Neural Networks - GitHub, accessed June 29, 2025, https://github.com/anilsathyan7/AI-Sudoku-Solver
15. neural network for sudoku solver - Stack Overflow, accessed June 29, 2025, https://stackoverflow.com/questions/44397123/neural-network-for-sudoku-solver
16. Solving Sudoku with Neural Networks - Charles Akin-David, Richard Mantey, Stanford CS230, accessed June 29, 2025, https://cs230.stanford.edu/files_winter_2018/projects/6939771.pdf
17. Multilayer Perceptrons in Machine Learning: A Comprehensive Guide - DataCamp, accessed June 29, 2025, https://www.datacamp.com/tutorial/multilayer-perceptrons-in-machine-learning
18. Deep Dive into Multilayer Perceptron - Number Analytics, accessed June 29, 2025, https://www.numberanalytics.com/blog/deep-dive-multilayer-perceptron-neural-networks
19. One hot encoding in MLP question : r/neuralnetworks - Reddit, accessed June 29, 2025, https://www.reddit.com/r/neuralnetworks/comments/awv9h4/one_hot_encoding_in_mlp_question/
20. Sudoku Solver: Image Processing and Deep Learning - MathWorks Blogs, accessed June 29, 2025, https://blogs.mathworks.com/deep-learning/2018/11/15/sudoku-solver-image-processing-and-deep-learning/
21. Algorithm to Solve Sudoku | Sudoku Solver - GeeksforGeeks, accessed June 29, 2025, https://www.geeksforgeeks.org/dsa/sudoku-backtracking-7/
22. darkeclipz/sudoku-csp: Solving Sudoku as a Constraint Satisfaction Problem (CSP) - GitHub, accessed June 29, 2025, https://github.com/darkeclipz/sudoku-csp
23. Sudoku puzzles, Constraint programming and Graph Theory - OpenSourc.ES, accessed June 29, 2025, https://opensourc.es/blog/sudoku/
24. Making a Sudoku Solver: Having no Idea What I'm doing. - mrlokans.work, accessed June 29, 2025, https://mrlokans.work/posts/making-a-sudoku-solver/
25. Perception-based constraint solving for sudoku images - OpenReview, accessed June 29, 2025, https://openreview.net/forum?id=a9o7WBjEbo
26. Designing Logic Tensor Networks for Visual Sudoku puzzle classification - University of Oxford Department of Computer Science, accessed June 29, 2025, https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2023/paper19.pdf
27. Designing Logic Tensor Networks for Visual Sudoku puzzle classification - CEUR-WS, accessed June 29, 2025, https://ceur-ws.org/Vol-3432/paper19.pdf
28. Augmented Reality-based Sudoku Solver with Training Module to Improve Cognitive Skills - ResearchGate, accessed June 29, 2025, https://www.researchgate.net/publication/385359219_Augmented_Reality-based_Sudoku_Solver_with_Training_Module_to_Improve_Cognitive_Skills
29. Epoch in Neural Networks - Baeldung on Computer Science, accessed June 29, 2025, https://www.baeldung.com/cs/epoch-neural-networks
30. Determining the Number of Epochs - Medium, accessed June 29, 2025, https://medium.com/@rsvmukhesh/determining-the-number-of-epochs-d8b3526d8d06