Normal Distribution Calculator

Calculate z-scores, P(X<x), P(X>x), and P(a<X<b) for any normal distribution. Includes inverse normal (find x given P), 68-95-99.7 rule, and Central Limit Theorem.

[Interactive calculator: the basic view computes the z-score, P(X < x), P(X > x), and the PDF f(x); the Extended view adds P(Z < z), P(Z > z), P(−|z| < Z < |z|), and a 68-95-99.7 rule check; the Professional view adds a current-x analysis, the full 68-95-99.7 rule (P(μ−σ < X < μ+σ), P(μ−2σ < X < μ+2σ), P(μ−3σ < X < μ+3σ)), the CLT standard error for an n = 30 sample mean, and normal distribution properties.]
How to Use This Calculator

  1. Enter x, mean (μ), and std dev (σ) to get z-score and probabilities.
  2. Use the Standard Normal tab to enter a z-score directly and get P(Z < z), P(Z > z), P(−|z| < Z < |z|), and the 68-95-99.7 check.
  3. Use General Normal tab to find P(a < X < b) between two values.
  4. Use the Inverse tab to find x given a cumulative probability (e.g. find the 95th percentile); a sketch of this calculation follows the list.
  5. The Professional tab shows the full 68-95-99.7 rule and the CLT standard error for the sample mean, SE = σ/√n.
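
A minimal sketch of the inverse-normal step from item 4, assuming Python 3.8+ and the standard-library statistics.NormalDist; the parameters μ = 65 and σ = 10 are illustrative, not taken from the calculator:

```python
from statistics import NormalDist  # Python 3.8+ standard library

dist = NormalDist(mu=65, sigma=10)  # illustrative parameters

# Inverse normal: find x such that P(X < x) = 0.95, i.e. the 95th percentile.
x_95 = dist.inv_cdf(0.95)
print(f"95th percentile: {x_95:.2f}")  # ≈ 81.45
```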

Formula

z-score: z = (x − μ) / σ

PDF: f(x) = (1/(σ√(2π))) × e^(−z²/2), where z = (x − μ)/σ

P(X < x): Φ(z), the standard normal CDF
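
In code, each of these formulas is a few lines. A minimal Python sketch (function names are illustrative), using math.erf for the CDF since Φ(z) = ½(1 + erf(z/√2)):

```python
import math

def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mu) / sigma

def normal_pdf(x, mu, sigma):
    """f(x) = (1/(sigma*sqrt(2*pi))) * exp(-z**2 / 2)."""
    z = z_score(x, mu, sigma)
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    """P(X < x) = Phi(z), expressed via the error function."""
    z = z_score(x, mu, sigma)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```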

Example

x=70, μ=65, σ=10: z = (70−65)/10 = 0.50. P(X < 70) = Φ(0.50) ≈ 0.6915 (69.15% of values below 70).
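
The same example checked against the standard library:

```python
from statistics import NormalDist

dist = NormalDist(mu=65, sigma=10)
print(dist.zscore(70))  # 0.5 (NormalDist.zscore needs Python 3.9+)
print(dist.cdf(70))     # 0.6914624612740131, i.e. ≈ 69.15% of values below 70
```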

Frequently Asked Questions

  • What is the normal distribution and why is it central to statistics? The normal (Gaussian) distribution is a continuous probability distribution characterized by its bell-shaped curve, symmetric about the mean μ. Its probability density function is f(x) = (1/(σ√(2π))) × exp(−(x−μ)²/(2σ²)). Carl Friedrich Gauss formalized it in 1809 while modeling astronomical measurement errors. The normal distribution is central to statistics for several reasons. First, it arises naturally as the limiting distribution of sums of independent random variables (Central Limit Theorem). Second, many physical measurements are approximately normally distributed: adult heights, IQ scores, measurement errors, blood pressure readings. Third, it is mathematically tractable: its mean, median, and mode are all equal; it is completely characterized by just two parameters (μ and σ); and integrals of the normal PDF have known forms via the error function. Fourth, many statistical tests (t-test, ANOVA, regression) assume normally distributed residuals, making the normal distribution the backbone of classical inferential statistics.
  • What is the 68-95-99.7 rule? The 68-95-99.7 rule (also called the empirical rule) states that for any normally distributed variable: approximately 68.27% of values fall within 1 standard deviation of the mean (between μ−σ and μ+σ); approximately 95.45% fall within 2 standard deviations (μ−2σ to μ+2σ); and approximately 99.73% fall within 3 standard deviations (μ−3σ to μ+3σ). These numbers come from integrating the normal PDF: P(|Z|<1) = 0.6827, P(|Z|<2) = 0.9545, P(|Z|<3) = 0.9973 (verified numerically in the first sketch after this list). Practical uses: in quality control, Six Sigma sets a ±6σ tolerance, corresponding to about 2 defects per billion (far beyond 3σ); in finance, a '3-sigma event' is considered extremely rare, but the 2008 financial crisis showed markets have fat tails that violate normality; in medicine, clinical reference ranges for lab tests are often set as mean ± 2σ, flagging the extreme 4.55% as abnormal. The rule is a quick mental calculator: if you know μ and σ, you immediately know where most of the data lies.
  • What is the Central Limit Theorem? The Central Limit Theorem (CLT) states that the sum (or mean) of a large number of independent, identically distributed random variables approaches a normal distribution, regardless of the original distribution's shape. Pierre-Simon Laplace proved an early version in 1810; Aleksandr Lyapunov proved a more general version in 1901. Why does this explain the ubiquity of normality? Many real-world measurements are the sum of many small independent factors. Adult height is the sum of contributions from hundreds of genetic variants, each small and roughly independent. IQ test scores sum across many subtests. A product's weight is the sum of weights of hundreds of components. Measurement error accumulates from many small independent sources. In each case, summing many independent effects produces an approximately normal distribution. The CLT is why statisticians can analyze means of samples even when the population is non-normal: by a common rule of thumb, for sample sizes n ≥ 30, sample means are approximately normally distributed regardless of the population distribution (the second sketch after this list demonstrates this by simulation). This justifies t-tests, z-tests, and ANOVA in practice.
  • When is data not normally distributed? Many real datasets are not normally distributed, and forcing a normal model can lead to badly wrong conclusions. Situations where normality fails: income and wealth data are heavily right-skewed, so the mean greatly exceeds the median; financial returns have fat tails (kurtosis > 3, 'leptokurtic'), meaning extreme events occur far more often than a normal model predicts (the 2008 crisis was called a '25-sigma event' under normal assumptions but is far less improbable under fat-tailed models); count data (number of events, hospital admissions) must be non-negative integers, so use Poisson or negative binomial models; binary/categorical outcomes require logistic regression, not linear regression assuming normality; survival times are positive and right-skewed, so use exponential, Weibull, or log-normal distributions; test scores bounded at 0 and 100 can be compressed at the boundaries. Signs your data isn't normal: the Q-Q plot deviates from a straight line, the Shapiro-Wilk test rejects (it is most powerful for n < 50), or the histogram shows strong skewness or multiple modes. Always examine your data before assuming normality.
  • How do I test whether my data is normal? Several complementary approaches exist for checking normality. Graphical methods: a histogram should appear bell-shaped and symmetric; a Q-Q (quantile-quantile) plot compares your data quantiles to theoretical normal quantiles, and the points should lie on a straight diagonal line. Formal statistical tests: the Shapiro-Wilk test is the most powerful for small samples (n < 50) and widely recommended; the Kolmogorov-Smirnov test compares the empirical CDF to the normal CDF; the Anderson-Darling test gives more weight to the tails. These tests have a known limitation: with large samples (n > 200), they reject normality for trivially small deviations that have no practical consequence, while with small samples they have low power and may miss meaningful non-normality. Descriptive statistics help too: skewness near 0 and kurtosis near 3 (excess kurtosis near 0) suggest normality. In practice, most statisticians use a combination of the Q-Q plot and the Shapiro-Wilk test (the last sketch after this list runs the Shapiro-Wilk test and the descriptive checks in SciPy). Remember that slight non-normality rarely matters for t-tests and ANOVA with moderate to large samples, because the CLT makes these tests robust.
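
As a computational check on the empirical rule above, P(|Z| < k) = 2Φ(k) − 1 follows from symmetry; a minimal Python sketch:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for k in (1, 2, 3):
    print(f"P(|Z| < {k}) = {2 * phi(k) - 1:.4f}")
# P(|Z| < 1) = 0.6827
# P(|Z| < 2) = 0.9545
# P(|Z| < 3) = 0.9973
```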
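The CLT claim can also be seen in a small simulation: means of n = 30 draws from a strongly right-skewed exponential population cluster normally around the population mean. The seed, sample size, and trial count here are arbitrary demo choices:

```python
import random
import statistics

random.seed(1)           # reproducible demo
n, trials = 30, 10_000   # sample size and number of sample means

# Exponential population with mean 1 and standard deviation 1 (right-skewed).
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

# CLT prediction: mean ≈ 1, SE = sigma/sqrt(n) = 1/sqrt(30) ≈ 0.183.
print(f"mean of sample means: {statistics.fmean(means):.3f}")
print(f"observed SE:          {statistics.stdev(means):.3f}")
```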
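Finally, the normality checks from the last answer, sketched with SciPy (assuming NumPy and SciPy are installed; the sample here is synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=65, scale=10, size=100)   # synthetic sample

# Shapiro-Wilk: a small p-value is evidence against normality.
w, p = stats.shapiro(data)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Descriptive checks: skewness near 0 and excess kurtosis near 0 suggest normality.
print(f"skewness:        {stats.skew(data):.3f}")
print(f"excess kurtosis: {stats.kurtosis(data):.3f}")  # Fisher definition
```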

Sources & References
  1. Gauss 1809 — Theoria Motus Corporum Coelestium (Gaussian distribution origins) — Perthes & Besser
  2. Laplace 1810 — Central Limit Theorem (proof of normal as limit) — Académie des Sciences
  3. OpenStax Statistics — Chapter 6: Normal Distribution — OpenStax
  4. NIST/SEMATECH Engineering Statistics Handbook — Normal Distribution — NIST
  5. MIT OCW 18.05 — Introduction to Probability and Statistics — MIT OpenCourseWare