Architecture Playground

Visually understand how neural network architectures work: how parameter counts scale, how memory changes, and how compute grows. See the real math behind the models.

MLP Configuration

Input size: 3
Hidden layers: 2
Neurons per hidden layer: 4
Output size: 2
Total Parameters: 46 (36 weights + 10 biases)
Compute: 72 FLOPs per forward pass (216 per training step)
Storage Memory: 184 B at FP32 precision
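The three summary figures follow directly from the layer sizes. A minimal sketch, assuming the 3-4-4-2 configuration above, the common "2 FLOPs per weight" forward-pass estimate, and a training step costed at 3× the forward pass:

```python
# Parameter, FLOP, and memory estimates for a fully-connected MLP.
# Layer sizes match the configuration above: 3 inputs, two hidden
# layers of 4 neurons each, and 2 outputs.
layers = [3, 4, 4, 2]

# Each layer's weight matrix has n_in * n_out entries; one bias per neuron.
weights = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))
biases = sum(layers[1:])
params = weights + biases

# One multiply + one add per weight ≈ 2 FLOPs per weight per forward pass.
# A training step (forward + backward) is commonly estimated at 3× forward.
forward_flops = 2 * weights
train_flops = 3 * forward_flops

# FP32 stores each parameter in 4 bytes.
storage_bytes = 4 * params

print(weights, biases, params)     # 36 10 46
print(forward_flops, train_flops)  # 72 216
print(storage_bytes)               # 184
```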

MLP Deep Dive

All the math behind your 3×4×4×2 network — live

A Multi-Layer Perceptron is a stack of fully-connected layers. Every neuron in one layer sends a signal to every neuron in the next — that is why it's called fully connected.
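That dense connectivity amounts to one matrix multiply per layer. A minimal NumPy sketch of a single fully-connected layer, with random illustrative weights and sigmoid as the activation σ:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One fully-connected layer: every input feeds every neuron,
# so W has shape (n_out, n_in) and each neuron computes Σ(w·x) + b.
n_in, n_out = 3, 4
W = rng.standard_normal((n_out, n_in))
b = np.zeros(n_out)

x = rng.standard_normal(n_in)  # one input sample with 3 features
z = W @ x + b                  # pre-activation: z = Σ(w·x) + b per neuron
a = sigmoid(z)                 # activation: a = σ(z)
print(a.shape)                 # (4,)
```

The weight count in the table falls out of the same shapes: W alone holds n_in × n_out = 12 values for Hidden 1, plus 4 biases.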

Input (3) → Hidden 1 (4) → Hidden 2 (4) → Output (2)
| Layer        | Neurons | Weights | Biases | Subtotal | Math                      |
|--------------|---------|---------|--------|----------|---------------------------|
| Input Layer  | 3       | 0       | 0      | 0        | Raw features — no params  |
| Hidden 1     | 4       | 12      | 4      | 16       | z = Σ(w·x) + b → a = σ(z) |
| Hidden 2     | 4       | 16      | 4      | 20       | z = Σ(w·x) + b → a = σ(z) |
| Output Layer | 2       | 8       | 2      | 10       | Softmax → class probabilities |
| Grand Total  |         |         |        | 46       | 36 W + 10 B               |
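Chaining the rows gives the whole network. A sketch of the full 3 → 4 → 4 → 2 forward pass, again with illustrative random weights, sigmoid hidden activations, and a softmax output as in the table:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())  # max-shift for numerical stability
    return e / e.sum()

# Layer sizes from the table: input 3, two hidden layers of 4, output 2.
sizes = [3, 4, 4, 2]
Ws = [rng.standard_normal((o, i)) for i, o in zip(sizes, sizes[1:])]
bs = [np.zeros(o) for o in sizes[1:]]

a = rng.standard_normal(sizes[0])        # raw input features (no params)
for W, b in zip(Ws[:-1], bs[:-1]):
    a = sigmoid(W @ a + b)               # hidden layers: a = σ(Wx + b)
probs = softmax(Ws[-1] @ a + bs[-1])     # output layer: class probabilities

print(probs.shape, round(probs.sum(), 6))  # (2,) 1.0
```

The softmax guarantees the two outputs are non-negative and sum to 1, which is what lets the output layer be read as class probabilities.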