Interactive laboratory to explore how neural networks transform data. Activation functions introduce non-linearity, allowing deep learning models to learn complex patterns. Select any function below to experiment visually and mathematically.
ReLU: Outputs max(0, x); fast to compute and widely used in deep learning networks.
Leaky ReLU: Prevents dying neurons by allowing a small slope for negative inputs.
PReLU: Like Leaky ReLU, but the negative slope is learned during training.
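The three ReLU variants above differ only in how they treat negative inputs. A minimal sketch in plain Python, assuming the descriptions refer to ReLU, Leaky ReLU, and PReLU (the function names and default slope are illustrative):

```python
def relu(x):
    # ReLU: max(0, x) — cheap, but the gradient is zero for x < 0,
    # which is what lets neurons "die".
    return max(0.0, x)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: a small fixed slope keeps negative inputs alive.
    return x if x > 0 else slope * x

def prelu(x, alpha):
    # PReLU: same shape as Leaky ReLU, but alpha is a learned parameter.
    return x if x > 0 else alpha * x
```

For x = -2.0, ReLU returns 0.0, Leaky ReLU returns -0.02, and PReLU returns whatever the learned alpha dictates.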
Sigmoid: Outputs values between 0 and 1; commonly used in binary classification.
Tanh: Outputs values between −1 and 1 and is zero-centered.
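The two squashing functions above are closely related: tanh is a rescaled sigmoid, shifted to be zero-centered. A small sketch (function names are illustrative):

```python
import math

def sigmoid(x):
    # Maps any real input into (0, 1); interpretable as a probability.
    return 1.0 / (1.0 + math.exp(-x))

# tanh maps into (-1, 1) and is zero-centered; it satisfies
# tanh(x) = 2 * sigmoid(2 * x) - 1, so the two curves have the same shape.
```

Because tanh is zero-centered, its outputs average closer to zero, which often helps gradient flow in hidden layers compared to sigmoid.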
SELU: Self-normalizing activation used in deep networks.
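"Self-normalizing" means SELU's fixed scale and alpha constants push layer activations toward zero mean and unit variance as they propagate. A sketch, assuming the standard published constants (alpha ≈ 1.6733, scale ≈ 1.0507):

```python
import math

# Constants from the SELU derivation (assumed values, rounded here in comments).
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    # Positive inputs are scaled linearly; negative inputs follow a scaled
    # exponential that saturates at -SELU_SCALE * SELU_ALPHA (about -1.758).
    return SELU_SCALE * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))
```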
Binary Step: Outputs either 0 or 1 depending on a threshold.
Linear: Identity activation, often used in regression output layers.
Softmax: Converts a vector of outputs into probabilities for multi-class classification.
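Unlike the other functions here, softmax acts on a whole vector at once: it exponentiates each logit and normalizes so the outputs sum to 1. A minimal sketch (the max-subtraction is the standard numerical-stability trick, not something specific to this lab):

```python
import math

def softmax(logits):
    # Subtract the max logit before exponentiating so math.exp never
    # overflows; this leaves the result mathematically unchanged.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, softmax([2.0, 1.0, 0.1]) returns three probabilities that sum to 1, with the largest probability on the largest logit.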