The Sigmoid Function: Foundation of Neural Networks
Last Updated on September 29, 2025 by Editorial Team
Author(s): Niraj
Originally published on Towards AI.
Series: Foundation of AI — Blog 1

Every modern neural network stands on mathematical pillars.
One of the most important is the sigmoid activation function.
It’s not just a formula; it’s the bridge between linear math and nonlinear learning.
What is the Sigmoid?
Defined as:
σ(z) = 1 / (1 + e⁻ᶻ)
It takes any real number z and compresses it into a value between 0 and 1. Think of it as a soft decision-maker: instead of answering “True/False”, it answers “how likely is True?”.
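A minimal sketch in Python (assuming NumPy is installed) shows this squashing behavior on a handful of sample inputs:

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Large negative inputs land near 0, large positive inputs near 1,
# and z = 0 maps exactly to 0.5.
for z in [-10.0, -1.0, 0.0, 1.0, 10.0]:
    print(f"sigmoid({z:6.1f}) = {sigmoid(z):.6f}")
```

Note that np.exp(-z) can overflow for very negative z; numerically stable implementations branch on the sign of z, but the direct form above is fine for illustration.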
Why Sigmoid Matters
Before sigmoid, models could only perform linear separation. With sigmoid, neurons could model probabilities and learn complex curves. It gave neural networks their first real ability to handle classification.
The sigmoid’s ability to output values between 0 and 1 makes it ideal for:
- Probability estimation — interpreting outputs as likelihoods
- Binary classification — distinguishing between two classes
- Gradient-based learning — enabling smooth weight updates
In short, the sigmoid gave neurons a smooth, differentiable way to move beyond purely linear decision boundaries.
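As a concrete and purely illustrative sketch, the snippet below treats the sigmoid of a linear score as P(class = 1) and thresholds it at 0.5 to make a binary decision; the weights, bias, and input values are made-up numbers, not from any particular dataset:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-activation (logit) from a linear model: z = w·x + b
w = np.array([0.8, -0.4])
b = 0.1
x = np.array([2.0, 1.5])

z = np.dot(w, x) + b      # raw linear score, any real number
p = sigmoid(z)            # interpreted as P(class = 1 | x)
label = int(p >= 0.5)     # threshold the probability into a decision

print(f"z = {z:.3f}, P(class=1) = {p:.3f}, predicted label = {label}")
```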
Derivative: The Learning Engine
The mathematical elegance lies in how the sigmoid changes during training. Its derivative is simple yet profound:
dσ/dz = σ(z) ⋅ (1 − σ(z))
This compact formula lets gradients flow backward through the network, which is exactly what backpropagation requires. Without such a cheap, closed-form derivative, gradient-based training of deep networks would be far less practical.
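As a quick sanity check (a sketch assuming NumPy), the closed-form derivative can be compared against a central finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # dσ/dz = σ(z) · (1 − σ(z))

# Compare the closed form against a central finite difference.
eps = 1e-6
for z in [-2.0, 0.0, 3.0]:
    numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
    print(f"z = {z:4.1f}: analytic = {sigmoid_grad(z):.6f}, numeric = {numeric:.6f}")
```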
How We Derive This
Step 1: Start with the function
σ(z) = (1 + e⁻ᶻ)⁻¹
Step 2: Apply the Chain Rule
dσ/dz = -1 ⋅ (1 + e⁻ᶻ)⁻² ⋅ d/dz(1 + e⁻ᶻ)
Step 3: Differentiate the Inner Function
d/dz(1 + e⁻ᶻ) = -e⁻ᶻ
Step 4: Combine the Results
dσ/dz = -1 ⋅ (1 + e⁻ᶻ)⁻² ⋅ (-e⁻ᶻ) = e⁻ᶻ / (1 + e⁻ᶻ)²
Step 5: Express in Terms of σ(z)
Notice that:
- σ(z) = 1 / (1 + e⁻ᶻ)
- 1 − σ(z) = (1 + e⁻ᶻ − 1) / (1 + e⁻ᶻ) = e⁻ᶻ / (1 + e⁻ᶻ)
Multiplying them gives:
σ(z) ⋅ (1 − σ(z)) = [1 / (1 + e⁻ᶻ)] ⋅ [e⁻ᶻ / (1 + e⁻ᶻ)] = e⁻ᶻ / (1 + e⁻ᶻ)²
Final Result: dσ/dz = σ(z) ⋅ (1 − σ(z))
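If SymPy is available, the whole derivation can be double-checked symbolically; this is just a verification sketch, not part of the original derivation:

```python
import sympy as sp

z = sp.symbols('z')
sigma = 1 / (1 + sp.exp(-z))           # Step 1: the sigmoid itself

derivative = sp.diff(sigma, z)         # Steps 2-4: chain rule, done symbolically
compact_form = sigma * (1 - sigma)     # Step 5: the compact form σ(z)·(1 − σ(z))

print(derivative)                       # raw chain-rule result
print(derivative.equals(compact_form))  # True: both expressions agree
```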
Why This Matters for Learning
This derivative is computationally efficient because it reuses the neuron’s current output: once σ(z) has been computed in the forward pass, the gradient costs only a subtraction and a multiplication. During backpropagation, it determines how much each weight should change, which is what makes neural network training practical.
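To make this concrete, here is a sketch of a single gradient-descent step for one sigmoid neuron with a squared-error loss (the loss choice, learning rate, and input values are illustrative assumptions); note how the backward pass reuses the cached output a rather than recomputing the activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One gradient-descent step for a single sigmoid neuron, squared-error loss.
x = np.array([0.5, -1.2])   # input features (illustrative values)
y = 1.0                     # target label
w = np.array([0.1, 0.3])    # weights
b = 0.0                     # bias
lr = 0.5                    # learning rate

# Forward pass
z = np.dot(w, x) + b
a = sigmoid(z)              # neuron output, cached for the backward pass

# Backward pass: dL/dz = (a − y) · σ'(z), with σ'(z) = a · (1 − a) reused from a
grad_z = (a - y) * a * (1.0 - a)
w -= lr * grad_z * x
b -= lr * grad_z

print(f"output before update: {a:.4f}, loss gradient wrt z: {grad_z:.4f}")
```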
The sigmoid function demonstrated that neural networks could learn from data through mathematical optimization, paving the way for modern deep learning.
Next in series: The limitations of sigmoid and the evolution to modern activation functions.