Micrograd-Mini is a tiny scalar-valued autograd engine with a basic neural network framework, built from scratch and inspired by Andrej Karpathy's micrograd.
It supports reverse-mode automatic differentiation (backpropagation) and can train a simple neural network (an MLP) on tasks like XOR. Great for learning the internals of backprop!
- ✅ Reverse-mode autodiff (backpropagation)
- ✅ Dynamic computation graph (DAG)
- ✅ Custom `Value` class with gradients (quick usage sketch below)
- ✅ Tanh and ReLU activations
- ✅ Basic `Neuron`, `Layer`, and `MLP` implementation
- ✅ Trains on the XOR dataset
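This README doesn't reproduce the engine's API, so the snippet below is a minimal usage sketch, assuming `engine.py` exposes a micrograd-style `Value` with `.data`, `.grad`, operator overloads, a `tanh()` activation, and a `backward()` method. If the names differ, the workflow (build an expression, call `backward()`, read the gradients) is the part that carries over.

```python
# Minimal usage sketch. Assumes engine.py exposes a micrograd-style Value
# with .data, .grad, operator overloads, tanh(), and backward().
from engine import Value

a = Value(2.0)
b = Value(-3.0)
c = a * b + a          # each operation records a node in the computation graph
d = c.tanh()           # the nonlinearity is recorded too

d.backward()           # reverse-mode autodiff: fills in .grad on every node

print(d.data)          # forward result
print(a.grad, b.grad)  # dd/da and dd/db computed by backpropagation
```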
micrograd-mini/
├── engine.py   # Core autodiff engine (Value class)
├── nn.py       # Neural network components
├── train.py    # Training loop for XOR
├── example.py  # Better example using the XOR dataset
└── README.md   # Project info
- Python 3.7+
- No external libraries needed (pure Python)
python train.py
--- Final Predictions after Training ---
Input: [0.0, 0.0] => Predicted: 0.01 | Target: 0.0
Input: [0.0, 1.0] => Predicted: 0.98 | Target: 1.0
Input: [1.0, 0.0] => Predicted: 0.97 | Target: 1.0
Input: [1.0, 1.0] => Predicted: 0.03 | Target: 0.0
Training complete! 🎯
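For orientation, here is a rough sketch of the kind of loop `train.py` runs on the XOR data. The `MLP(2, [4, 4, 1])` layer sizes, the `parameters()` method, the learning rate, and the epoch count are assumptions modeled on micrograd's `nn` module, not necessarily this project's exact code.

```python
# Hypothetical XOR training loop (names modeled on micrograd's nn API).
from nn import MLP

xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
ys = [0.0, 1.0, 1.0, 0.0]          # XOR targets

model = MLP(2, [4, 4, 1])          # 2 inputs -> two hidden layers -> 1 output

for epoch in range(2000):
    # forward pass: squared-error loss over the four examples
    preds = [model(x) for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys))

    # backward pass: clear stale gradients, then backpropagate the loss
    for p in model.parameters():
        p.grad = 0.0
    loss.backward()

    # plain gradient-descent update
    for p in model.parameters():
        p.data -= 0.05 * p.grad
```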
Want to really understand backpropagation and gradients?
- Dive into `engine.py` and explore the `Value` class
- Inspect how operations dynamically build a graph
- See how `.backward()` traverses it to compute gradients, just like real frameworks do! (A standalone sketch of this traversal follows below.)
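To make that traversal concrete, here is a self-contained sketch (not the project's actual `engine.py`) of the core idea: each operation records its parent nodes and a local chain-rule step, and `backward()` walks the graph in reverse topological order.

```python
# Standalone illustration of reverse-mode autodiff over a recorded DAG.
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._prev = set(_children)    # parent nodes in the graph
        self._backward = lambda: None  # local chain-rule step, set by each op

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad      # d(out)/d(self) = 1
            other.grad += out.grad     # d(out)/d(other) = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply local rules from output to inputs
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0                # seed: d(out)/d(out) = 1
        for node in reversed(topo):
            node._backward()

# gradients of f = a*b + a with respect to a and b
a, b = Value(2.0), Value(-3.0)
f = a * b + a
f.backward()
print(a.grad, b.grad)  # -2.0 (= b + 1), 2.0 (= a)
```

The real engine adds more operators plus the Tanh and ReLU activations, but the two pieces above, recording parents per operation and replaying local derivative rules in reverse topological order, are the core mechanism.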
This project is heavily inspired by micrograd by Andrej Karpathy, licensed under the MIT License.
This project is licensed under the MIT License. See the LICENSE file for details.
Built with ❤️ by Muawiya, as part of a deep dive into AI, neural nets, and autodiff fundamentals.
- 📺 YouTube: @Coding_Moves
- 💻 GitHub: Muawiya
- 🧠 LeetCode: Moavia_Amir
- 📊 Kaggle: Moavia Amir