Architecture Playground
Visually explore how neural network architectures work: how parameter counts scale, how memory changes, and how compute grows. See the real math behind the models.
MLP Configuration
- Input Size: 3
- Hidden Layers: 2
- Neurons per Layer: 4
- Output Classes: 2
Total Parameters: 46 (36 weights + 10 biases)
Compute: 72 FLOPs per forward pass (train step: 216)
Storage Memory: 184 B at FP32 precision
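The three headline numbers follow directly from the layer sizes. A minimal sketch in plain Python, assuming the playground's counting conventions (2 FLOPs per weight for a multiply-accumulate, a training step costed at 3× a forward pass, and 4 bytes per FP32 parameter):

```python
def mlp_stats(layers):
    """Parameter, FLOP, and memory counts for a fully-connected net.

    layers: neuron counts per layer, e.g. [3, 4, 4, 2].
    """
    # Each weight matrix connects consecutive layers: n_in x n_out weights.
    weights = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))
    # One bias per neuron in every layer after the input.
    biases = sum(layers[1:])
    params = weights + biases
    forward_flops = 2 * weights       # one multiply + one add per weight
    train_flops = 3 * forward_flops   # forward + backward (~2x forward)
    memory_bytes = 4 * params         # FP32 = 4 bytes per parameter
    return params, weights, biases, forward_flops, train_flops, memory_bytes

print(mlp_stats([3, 4, 4, 2]))  # (46, 36, 10, 72, 216, 184)
```

The same function reproduces the dashboard for any layer configuration, e.g. `mlp_stats([784, 128, 10])` for a small MNIST classifier.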
MLP Deep Dive
All the math behind your 3→4×2→2 network — live
A Multi-Layer Perceptron is a stack of fully-connected layers. Every neuron in one layer sends a signal to every neuron in the next — that is why it's called fully connected.
Input (3)→Hidden 1 (4)→Hidden 2 (4)→Output (2)
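The per-neuron rule z = Σ(w·x) + b, a = σ(z) applied layer by layer is the whole forward pass. A sketch of the 3→4→4→2 network in plain Python, with sigmoid hidden activations and a softmax output as in the table below; the random weights and the input vector are illustrative, not values from the playground:

```python
import math
import random

def dense(x, W, b):
    # z_j = sum_i(W[j][i] * x[i]) + b[j] for one fully-connected layer
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def sigmoid(z):
    return [1.0 / (1.0 + math.exp(-v)) for v in z]

def softmax(z):
    m = max(z)                        # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

random.seed(0)
sizes = [3, 4, 4, 2]                  # the network shown above
params = [([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(sizes, sizes[1:])]

x = [0.5, -1.0, 2.0]                  # illustrative input features
for i, (W, b) in enumerate(params):
    z = dense(x, W, b)
    x = softmax(z) if i == len(params) - 1 else sigmoid(z)

print(x)  # two class probabilities summing to 1
```

Every neuron reads every activation from the previous layer, which is exactly why each weight matrix has n_in × n_out entries.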
| Layer | Neurons | Weights | Biases | Subtotal | Math |
|---|---|---|---|---|---|
| Input Layer | 3 | 0 | 0 | 0 | Raw features — no params |
| Hidden 1 | 4 | 12 | 4 | 16 | z = Σ(w·x) + b → a = σ(z) |
| Hidden 2 | 4 | 16 | 4 | 20 | z = Σ(w·x) + b → a = σ(z) |
| Output Layer | 2 | 8 | 2 | 10 | Softmax → class probabilities |
| Grand Total | 13 | 36 | 10 | 46 | 36 W + 10 B |
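Each row above can be reproduced mechanically: a layer with n_in inputs and n_out neurons holds n_in × n_out weights plus n_out biases. A short sketch that prints the same breakdown:

```python
sizes = [3, 4, 4, 2]
names = ["Hidden 1", "Hidden 2", "Output"]
total = 0
for name, n_in, n_out in zip(names, sizes, sizes[1:]):
    w, b = n_in * n_out, n_out        # weights = n_in * n_out, one bias per neuron
    total += w + b
    print(f"{name}: {w} weights + {b} biases = {w + b}")
print(f"Grand total: {total}")
# Hidden 1: 12 weights + 4 biases = 16
# Hidden 2: 16 weights + 4 biases = 20
# Output: 8 weights + 2 biases = 10
# Grand total: 46
```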