Neural Networks 101
Discover the building blocks of deep learning and learn how they fit together in practice.
What you’ll learn
- Build the perceptron and extend it to multi-layer perceptrons (MLPs).
- Choose and use activation functions (ReLU, tanh, sigmoid, GELU—when and why).
- Match loss functions to tasks (MSE, binary cross-entropy, softmax cross-entropy).
- Implement backpropagation via the chain rule.
- Understand how nonlinearity gives networks representational power (XOR and beyond).
- See how loss + activation pairings shape gradients, e.g., softmax + cross-entropy (a short sketch follows this list).
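To make the last point concrete, here is a minimal NumPy sketch (the logits and class index are illustrative, not from the course materials) showing that when softmax and cross-entropy are combined, the gradient with respect to the logits collapses to `probs - one_hot`, which a finite-difference check confirms:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    # y is the index of the true class; loss = -log p_y.
    return -np.log(softmax(z)[y])

z = np.array([2.0, -1.0, 0.5])   # example logits
y = 1                            # true class index

# Analytic gradient of softmax + cross-entropy w.r.t. the logits: probs - one_hot(y).
probs = softmax(z)
one_hot = np.zeros_like(z)
one_hot[y] = 1.0
analytic = probs - one_hot

# Central finite-difference check of the same gradient.
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(3)[i], y) - cross_entropy(z - eps * np.eye(3)[i], y)) / (2 * eps)
    for i in range(3)
])

print(analytic)                                   # gradient pushes mass toward the true class
print(np.allclose(analytic, numeric, atol=1e-5))  # True
```

This simplification is exactly why the pairing matters: the gradient stays well behaved even when the predicted probability for the true class is tiny.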
Hands-on application
- Implement a 2-layer MLP from scratch in NumPy: forward pass, backward pass, and parameter updates (first sketch below).
- Rebuild the same model in PyTorch, writing a minimal training loop and comparing it to your scratch version (second sketch below).
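A minimal sketch of the scratch version, assuming a tiny XOR-style setup (layer sizes, learning rate, and variable names here are illustrative, not the notebook's exact choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 2 inputs, 8 hidden units, 1 output (binary classification).
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

# Toy data: XOR, the classic task a single linear layer cannot fit.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

lr = 0.5
for step in range(2000):
    # Forward pass: linear -> ReLU -> linear -> sigmoid.
    z1 = X @ W1 + b1
    h = np.maximum(z1, 0.0)
    z2 = h @ W2 + b2
    p = 1.0 / (1.0 + np.exp(-z2))

    # Binary cross-entropy loss (mean over the batch).
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    # Backward pass via the chain rule; for sigmoid + BCE, dL/dz2 = (p - y) / N.
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (z1 > 0)          # ReLU gradient
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Plain gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round(2).ravel())  # should approach [0, 1, 1, 0] once training converges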
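And one plausible shape of the PyTorch rebuild, again on the same toy XOR data (module choices and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

# Same toy XOR task as the NumPy sketch above.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# Equivalent 2-layer MLP: linear -> ReLU -> linear (outputs logits).
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy in one numerically stable op
opt = torch.optim.SGD(model.parameters(), lr=0.5)

for step in range(2000):
    opt.zero_grad()                # clear gradients from the previous step
    logits = model(X)              # forward pass
    loss = loss_fn(logits, y)      # scalar loss
    loss.backward()                # backprop: autograd applies the chain rule for you
    opt.step()                     # gradient-descent update

with torch.no_grad():
    print(torch.sigmoid(model(X)).round().ravel())  # expect [0, 1, 1, 0] if training converged
```

Comparing the two versions line by line is the point of the exercise: every `loss.backward()` call corresponds to the hand-written gradient block in the NumPy sketch.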
Prerequisites
- Comfort with derivatives and the chain rule, plus basic vector and matrix operations.
- Familiarity with linear and logistic regression, and with cross-entropy and MSE losses.
- Python experience (NumPy; PyTorch optional but introduced).
Who it’s for
- Developers who know basic ML and want a rigorous, code-first path into deep learning.
- Learners who prefer understanding the internals before stacking layers.
Format
- Duration: ~6–8 hours, fully self-paced.
- Structure: 6 modules, each with a brief lesson, a worked derivation, and a coding notebook.