Bayes' Theorem Calculator
Calculate posterior probability using Bayes' theorem. Supports standard form, medical test (sensitivity/specificity/prevalence), spam filter, and sequential Bayesian updates with likelihood ratios.
How to Use This Calculator
- Enter P(A) — the prior probability (e.g. disease prevalence = 0.01).
- Enter P(B|A) — the likelihood (e.g. test sensitivity = 0.95).
- Enter P(B|¬A) — the false positive rate (e.g. 1 − specificity = 0.05).
- Read the posterior P(A|B) and the marginal probability P(B).
- Use the Medical Test tab for full PPV/NPV analysis with sensitivity and specificity inputs.
- The Professional tab shows sequential Bayesian updates across multiple tests.
Formula
Bayes' theorem: P(A|B) = P(B|A) × P(A) / P(B)
P(B) = P(B|A) × P(A) + P(B|¬A) × (1−P(A))
Likelihood ratio: LR+ = P(B|A) / P(B|¬A)
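These formulas translate directly into code. A minimal Python sketch (the function and argument names are illustrative, not part of the calculator):

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(A|B) via Bayes' theorem.

    prior               = P(A)
    likelihood          = P(B|A)
    false_positive_rate = P(B|not-A)
    """
    # Marginal: P(B) = P(B|A)·P(A) + P(B|¬A)·(1 − P(A))
    marginal = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / marginal

# Likelihood ratio for a positive result: LR+ = P(B|A) / P(B|¬A)
lr_plus = 0.95 / 0.05  # = 19.0

print(bayes_posterior(0.01, 0.95, 0.05))  # ≈ 0.161 (16.1%)
```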
Example
Disease prevalence 1%, sensitivity 95%, false positive rate 5%: P(B) = 0.95×0.01 + 0.05×0.99 = 0.059. Posterior = 0.95×0.01 / 0.059 ≈ 16.1% — most positives are false positives.
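The same example extends to the sequential updates the calculator supports, using the odds form of Bayes' theorem: posterior odds = prior odds × LR+ⁿ after n independent positive tests. A small Python sketch (names are illustrative):

```python
def sequential_update(prior, lr, n_tests):
    """Posterior probability after n independent positive tests,
    via the odds form: posterior odds = prior odds × LR^n."""
    odds = prior / (1 - prior) * lr ** n_tests
    return odds / (1 + odds)

lr_plus = 0.95 / 0.05  # = 19
for n in (1, 2, 3):
    # Posterior rises with each positive test: ≈0.16, ≈0.78, ≈0.99
    print(n, round(sequential_update(0.01, lr_plus, n), 4))
```

Note how the first positive test only lifts the posterior to about 16%, but repeated independent positives quickly push it toward certainty.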
Frequently Asked Questions
- What is Bayes' theorem and why is it important? Bayes' theorem is a mathematical formula for updating probability beliefs in light of new evidence. It states: P(A|B) = P(B|A) × P(A) / P(B), where P(A) is the prior probability (your belief before seeing evidence B), P(B|A) is the likelihood (how probable the evidence is given A), P(B) is the marginal probability of the evidence, and P(A|B) is the posterior probability (your updated belief after seeing B). Thomas Bayes derived the theorem, which was published posthumously in 1763; Pierre-Simon Laplace independently discovered it in 1774 and stated it in its modern form. Bayes' theorem is foundational to statistics, machine learning, medical diagnosis, spam filtering, natural language processing, and scientific inference. Its importance lies in formalizing how rational agents should update their beliefs — moving from prior beliefs to posterior beliefs as evidence accumulates. The theorem is also the basis for Bayesian statistics, an entire paradigm of statistical inference that treats probability as a degree of belief rather than a long-run frequency.
- How does Bayes' theorem apply to medical testing? Medical testing is perhaps the most intuitive application of Bayes' theorem. A test has sensitivity (P(positive|disease) — true positive rate) and specificity (P(negative|no disease) — true negative rate). But what you actually want to know after a positive test is P(disease|positive) — the positive predictive value (PPV). Bayes' theorem connects these: PPV = (sensitivity × prevalence) / [(sensitivity × prevalence) + (1−specificity) × (1−prevalence)]. The prevalence (prior probability) is crucial. Even a very accurate test (95% sensitivity, 95% specificity) applied to a rare disease (1% prevalence) gives a PPV of only about 16% — most positive tests are false positives. This is mathematically inevitable: with only 1 true case per 100 people and a 5% false positive rate, you expect roughly 5 false positives for every 1 true positive. This reality has important implications for mass screening programs, where the base rate is typically very low and false positives can cause significant psychological and economic harm.
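The PPV formula above, together with its negative-test counterpart NPV, can be checked numerically; a Python sketch with illustrative function names:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: P(no disease | negative test)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# 95% sensitivity, 95% specificity, 1% prevalence:
print(ppv(0.95, 0.95, 0.01))  # ≈ 0.161: most positives are false positives
print(npv(0.95, 0.95, 0.01))  # ≈ 0.9995: a negative result is very reassuring
```

The asymmetry between the two numbers is exactly the low-prevalence effect the answer describes: negatives are trustworthy, positives are not.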
- What is the base rate fallacy? The base rate fallacy occurs when people ignore the prior probability (base rate) of an event and focus only on specific case information. A classic example: a cab was involved in a hit-and-run at night. A witness identifies it as blue. The witness correctly identifies cab colors 80% of the time. If 85% of cabs are green and 15% are blue, what's the probability the cab was actually blue? Most people say 80% — but Bayes' theorem gives the correct answer: P(blue|witness says blue) = (0.80 × 0.15) / (0.80 × 0.15 + 0.20 × 0.85) ≈ 41%. The high proportion of green cabs overwhelms the witness's evidence. Doctors fall into this trap when they believe a positive test means near-certain disease, ignoring disease prevalence. Judges and juries commit this error with forensic evidence. Security analysts commit it with intrusion detection alerts. Avoiding it requires explicitly multiplying the prior odds by the likelihood ratio to get posterior odds — which is exactly what Bayes' theorem computes.
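The cab calculation can be verified directly; a short Python sketch (variable names are illustrative):

```python
# Cab problem: 15% of cabs are blue, the witness is right 80% of the time.
prior_blue = 0.15
p_id_blue_given_blue = 0.80    # witness correctly says "blue"
p_id_blue_given_green = 0.20   # witness mistakenly says "blue"

numerator = p_id_blue_given_blue * prior_blue
marginal = numerator + p_id_blue_given_green * (1 - prior_blue)
p_blue = numerator / marginal
print(round(p_blue, 3))  # 0.414, far below the intuitive answer of 0.80
```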
- What is the difference between Bayesian and frequentist statistics? These are two foundational interpretations of probability with different philosophies. Frequentists define probability as the long-run frequency of an event in repeated experiments. Probability is a property of the real world — you can only assign probabilities to events that can be repeated. The null hypothesis is either true or false; you cannot assign a probability to it. Inference focuses on p-values and confidence intervals for procedures that control error rates. Bayesians define probability as a degree of belief, which can be assigned to any proposition, including hypotheses. Prior beliefs are updated via Bayes' theorem to produce posterior beliefs. Parameters have probability distributions reflecting uncertainty. Inference produces posterior distributions and credible intervals. Bayesian methods naturally handle small samples, incorporate domain knowledge through priors, and produce directly interpretable probability statements about hypotheses. Frequentist methods are computationally simpler and require no prior specification. Modern Bayesian computation (MCMC, variational inference) has made Bayesian methods practical even for complex models, and they are now widely used in machine learning and artificial intelligence.
- Why are most positive tests false positives when screening for a rare disease? This is a perfect illustration of the base rate fallacy resolved by Bayes' theorem. Consider HIV testing in a population where prevalence is 0.1% (1 in 1000). A high-quality HIV test has sensitivity ≈ 99.7% and specificity ≈ 99.9%. Apply Bayes' theorem: of 1,000,000 people tested, about 1,000 have HIV. The test correctly identifies 997 of them (true positives). Of the 999,000 HIV-negative people, the test incorrectly flags 0.1% = 999 people as positive (false positives). So of 997 + 999 = 1,996 positive tests, only 997 are true positives — a PPV of about 50%. The majority of positive tests are false positives despite the test being highly accurate, purely because the disease is rare. This is why clinical guidelines for low-prevalence screening recommend confirmatory testing after a positive result. In high-risk populations where prevalence might be 5–10%, the PPV rises dramatically — reinforcing that Bayes' theorem, not just test accuracy, determines what a positive result means.
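The counting argument can be reproduced in a few lines of Python (names are illustrative):

```python
# HIV screening: prevalence 0.1%, sensitivity 99.7%, specificity 99.9%.
population = 1_000_000
infected = population * 0.001                            # ≈ 1,000 people
true_positives = infected * 0.997                        # ≈ 997
false_positives = (population - infected) * (1 - 0.999)  # ≈ 999
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # roughly half of positive tests are true positives
```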
Sources & References
- Bayes 1763 — 'An Essay towards solving a Problem in the Doctrine of Chances' — Philosophical Transactions of the Royal Society
- Laplace 1774 — 'Mémoire sur la probabilité des causes par les événements' (independent derivation) — Académie des Sciences
- Berry — 'Bayesian Statistics' (introduction to Bayesian reasoning) — Duxbury Press
- Stanford Encyclopedia of Philosophy — Bayes' Theorem — Stanford University
- Khan Academy — Bayesian Statistics and Conditional Probability — Khan Academy