
Bayes’ Theorem Calculator

Update probability with evidence. Enter a prior, a likelihood, and a false-positive rate—then get the posterior P(A|B), the evidence probability P(B), and a clear breakdown in decimal and percent.


Bayesian Probability Evaluator

Compute P(A|B) using Bayes’ theorem with a simple two-case model: A vs ¬A. Save results to history and export to CSV.

P(A): How common A is before seeing evidence B (base rate).
P(B|A): How often evidence appears when A is true (e.g., sensitivity).
P(B|¬A): How often evidence appears when A is false (e.g., 1 − specificity).

Quick Steps

  1. Choose whether you want to enter values as percents or decimals.
  2. Enter P(A), P(B|A), and P(B|¬A) using the same format.
  3. Press Calculate to get P(A|B), P(B), and P(¬A|B).
  4. Use History to compare scenarios and export results to CSV.
Tip: A common “positive test” setup is A = condition present and B = test positive. Then P(A|B) is the probability you truly have the condition given a positive result.
Concept | Symbol | Plain Meaning | Example Interpretation
Prior | P(A) | How likely A is before evidence | Prevalence of a condition
Likelihood | P(B|A) | How often B happens if A is true | Sensitivity: positive if condition present
False positive | P(B|¬A) | How often B happens if A is false | 1 − specificity: positive if condition absent
Evidence | P(B) | Overall probability of observing B | Chance of a positive test in the population
Posterior | P(A|B) | Updated probability of A after seeing B | Chance condition is present given positive test
Complement | ¬A | “Not A” | No condition present

Formula Breakdown

  1. Compute the complement: P(¬A) = 1 − P(A).
  2. Compute evidence: P(B) = P(B|A)P(A) + P(B|¬A)P(¬A).
  3. Compute posterior: P(A|B) = P(B|A)P(A) ÷ P(B).
  4. Compute the alternative: P(¬A|B) = 1 − P(A|B).
Valid ranges: probabilities must be between 0 and 1 (decimal) or 0 and 100 (percent). If P(B) becomes 0, the posterior is undefined and the calculator will show an error.
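The four steps above can be sketched in a few lines of Python (an illustrative sketch; the function name and example inputs are my own, not the calculator's internals):

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Two-case Bayes update: returns (posterior, evidence, alt_posterior)."""
    p_not_a = 1 - p_a                                    # Step 1: complement
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a  # Step 2: evidence
    if p_b == 0:
        raise ValueError("P(B) is 0, so the posterior is undefined")
    p_a_given_b = p_b_given_a * p_a / p_b                # Step 3: posterior
    return p_a_given_b, p_b, 1 - p_a_given_b             # Step 4: alternative

posterior, evidence, alt = bayes_posterior(0.02, 0.85, 0.03)
print(round(posterior, 4))  # 0.3664
```

Note that all inputs are decimals here; percent inputs would be divided by 100 first.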

What Is Bayes’ Theorem and Why Do People Use It?

Bayes’ theorem is a simple rule for updating probabilities when you learn new information. It’s the math version of changing your mind responsibly: you start with what you believed before (a prior), you consider how strongly the new evidence supports or contradicts that belief (a likelihood), and you end with an updated belief (a posterior).

The reason Bayes’ theorem shows up everywhere is that real decisions rarely happen in a vacuum. You hear a claim, see a test result, read a signal, or notice a pattern—then you ask: “How should this change what I think is true?” Bayes gives you a structured answer, instead of relying on gut instinct alone.

How Does Bayes’ Theorem Work in Plain Language?

Imagine two possibilities: A is true, or A is not true (¬A). Then you observe evidence B. Bayes’ theorem compares two pathways to seeing B:

  • B happening because A is true (that’s P(B|A) times how common A is).
  • B happening even when A is false (that’s P(B|¬A) times how common ¬A is).

If B is much more likely when A is true than when A is false, then observing B should increase your confidence in A. But how much it increases depends heavily on how common A was to begin with. That’s the heart of Bayes: strong evidence matters, but base rates matter too.

The Core Formula: P(A|B) Without the Mystery

Bayes’ theorem is often written as:

P(A|B) = P(B|A) × P(A) ÷ P(B)

The piece that confuses many people is P(B), the probability of the evidence. In a simple two-case world (A vs ¬A), you can compute it directly:

P(B) = P(B|A)P(A) + P(B|¬A)P(¬A)

In other words, P(B) is the “total” chance of seeing B across all possibilities. It’s what makes the posterior properly normalized—so it stays between 0 and 1 and reflects the proportion of B-events that come from A.
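As a quick illustration of that normalization, the two pathways to B can be computed separately (the numbers here are made up for demonstration):

```python
p_a, p_b_given_a, p_b_given_not_a = 0.2, 0.7, 0.1

# The two pathways to seeing B:
from_a     = p_b_given_a * p_a            # B via A true:  0.7 * 0.2 = 0.14
from_not_a = p_b_given_not_a * (1 - p_a)  # B via A false: 0.1 * 0.8 = 0.08
p_b = from_a + from_not_a                 # total evidence: 0.22

# The posterior is simply A's share of the total B-mass:
posterior = from_a / p_b
print(round(posterior, 4))  # 0.6364
```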

Why Do People Mix Up P(A|B) and P(B|A)?

It’s common to confuse the probability of A given B with the probability of B given A because the notation looks symmetric. But these probabilities answer different questions:

  • P(B|A) asks: “If A is true, how likely is the evidence?”
  • P(A|B) asks: “If the evidence happened, how likely is A?”

A helpful mental reset is to read the bar as “given.” The part on the right of the bar is what you assume or observe, and the part on the left is what you’re trying to learn.

What Is the “Base Rate,” and When Does It Change Everything?

The base rate is the prior probability P(A). It’s the starting point before you see evidence. People often underestimate how powerful the base rate is, especially when A is rare. When a condition, fraud type, or event is uncommon, even decent evidence can produce a lot of false alarms.

Bayes’ theorem explains why. If only 1% of a population has a condition, then 99% does not. A test with a small false-positive rate can still generate many false positives simply because it is applied to a huge number of people who don’t have the condition. The posterior P(A|B) tells you what you actually want to know: how likely A is given that you saw the evidence.

How to Use This Bayes’ Theorem Calculator

Step 1: Pick your input format

You can enter values as percentages (0–100) or decimals (0–1). Choose one format and keep it consistent for all three inputs. The calculator will display results in both forms so you can interpret them easily.

Step 2: Enter the prior P(A)

P(A) is how likely A is before evidence. In health, it might be prevalence. In cybersecurity, it might be how often a certain type of incident occurs. In email, it might be the fraction of emails that are spam.

Step 3: Enter the likelihood P(B|A)

This describes how often you see evidence B if A is true. If B is a “positive test,” then P(B|A) is the sensitivity. If B is a warning flag, it’s how often that warning happens when the event truly exists.

Step 4: Enter the false-positive rate P(B|¬A)

This describes how often you see evidence B when A is false. In testing terms, it’s 1 minus specificity. In general decision terms, it’s how “noisy” your evidence is.

Step 5: Interpret P(A|B)

The posterior P(A|B) is the updated probability that A is true after observing B. The calculator also returns P(B) (how common the evidence is overall) and P(¬A|B) (how likely it is that A is false even though B occurred).

Worked Example: What If a Test Is “90% Accurate”?

“Accuracy” is a vague word, so Bayes forces you to be specific. Suppose A is a condition and B is a positive test:

  • P(A) = 1% prevalence
  • P(B|A) = 90% sensitivity
  • P(B|¬A) = 5% false-positive rate

Even though the test is strong for true cases and fairly low on false positives, the condition is rare. The result is often surprising: the probability of having the condition given a positive test may be much lower than 90%. That’s not because the test is “bad,” but because positives can come from both true cases and false positives—and there are many more non-cases to start with.
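Running those numbers confirms the surprise (a quick Python check):

```python
p_a         = 0.01  # 1% prevalence
p_b_given_a = 0.90  # 90% sensitivity
p_fp        = 0.05  # 5% false-positive rate

p_b = p_b_given_a * p_a + p_fp * (1 - p_a)  # evidence P(B) = 0.0585
posterior = p_b_given_a * p_a / p_b         # P(A|B)
print(f"{posterior:.1%}")  # 15.4%
```

Only about 15% of positives are true cases, despite the “90% accurate” test.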

How Can Bayes Help With Spam Filtering?

In a spam filter setup, A might mean “email is spam,” and B might mean “email contains a suspicious keyword.” The prior P(A) reflects how much spam you typically receive, P(B|A) reflects how often spammers use the keyword, and P(B|¬A) reflects how often legitimate emails also contain it. A good filter combines many signals (many B events), but each one is still a Bayesian update at heart: evidence that is common in spam and rare in normal mail pushes the posterior upward.
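Under a naive independence assumption, each keyword signal can be applied as its own update, with the posterior after one signal becoming the prior for the next. A sketch, with invented signal rates:

```python
def update(prior, p_signal_given_spam, p_signal_given_ham):
    """One Bayesian update for a single observed signal."""
    p_signal = p_signal_given_spam * prior + p_signal_given_ham * (1 - prior)
    return p_signal_given_spam * prior / p_signal

p_spam = 0.4  # assumed prior: 40% of incoming mail is spam

# Each tuple: (P(signal|spam), P(signal|ham)) -- invented example rates
signals = [(0.60, 0.05),   # "free" in subject line
           (0.30, 0.02),   # suspicious link domain
           (0.50, 0.10)]   # unknown sender

for p_given_spam, p_given_ham in signals:
    p_spam = update(p_spam, p_given_spam, p_given_ham)

print(round(p_spam, 4))  # 0.9983
```

Each signal is common in spam and rare in normal mail, so each update pushes the posterior upward.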

Why Does P(B) Matter More Than People Expect?

P(B) is the denominator, so it scales the final result. If evidence B is common across both A and ¬A, then B doesn’t tell you much. If evidence B is rare overall but strongly associated with A, then B is informative. Bayes captures this directly: evidence that is ubiquitous isn’t strong evidence.

What If You Want to Update Multiple Times?

Many real situations involve repeated evidence. You can treat today’s posterior as tomorrow’s prior and update again when new evidence arrives. This “posterior becomes prior” workflow mirrors how you learn in real life: each new observation reshapes your belief. The calculator’s History tab can help you compare scenarios and understand how changing one input—like the base rate—changes your conclusion.
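That workflow can be sketched directly; the prior and evidence rates below are hypothetical:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """One two-case Bayes update: returns the posterior P(A|B)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

belief = 0.01  # start with an assumed 1% prior

# Observe the same kind of evidence on three separate occasions;
# each round's posterior becomes the next round's prior.
for n in range(1, 4):
    belief = bayes_update(belief, 0.90, 0.05)
    print(f"after observation {n}: {belief:.4f}")
# prints 0.1538, then 0.7660, then 0.9833
```

Repeated consistent evidence compounds: three observations take a 1% prior above 98%.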

How to Avoid Common Bayes Mistakes

Mixing formats (percent vs decimal)

A probability of 0.9 equals 90%. If you accidentally enter 90 as a decimal value, you’ll break the model. Choose an input format and keep it consistent.
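A small validation helper can catch that mistake early (a sketch; to_decimal is my own name, not part of the calculator):

```python
def to_decimal(value, fmt="percent"):
    """Normalize a user input to a probability in [0, 1]."""
    p = value / 100 if fmt == "percent" else value
    if not 0 <= p <= 1:
        raise ValueError(f"probability out of range: {value} ({fmt})")
    return p

print(to_decimal(90))              # 0.9
print(to_decimal(0.9, "decimal"))  # 0.9
# to_decimal(90, "decimal") would raise ValueError -- the mixed-format mistake
```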

Forgetting the complement ¬A

Bayes needs both sides of the world: what happens if A is true and what happens if A is false. If you ignore ¬A, you’ll overestimate certainty.

Assuming “high sensitivity” means “high posterior”

Sensitivity alone does not determine P(A|B). The false-positive rate and the prior can dominate, especially when A is rare.

Using vague “accuracy” claims

If someone says a test is “95% accurate,” ask: is that sensitivity, specificity, or something else? Bayes requires P(B|A) and P(B|¬A), not a single marketing number.

What If You Only Know Specificity Instead of False Positives?

Specificity is P(¬B|¬A), the chance of a negative result when A is false. If you have specificity, you can get the false-positive rate as:

P(B|¬A) = 1 − specificity

For example, a specificity of 97% means a false-positive rate of 3%. Plug that value into the calculator to compute the posterior.
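In code, the conversion is one line (rounding sidesteps floating-point noise):

```python
specificity = 0.97
false_positive_rate = 1 - specificity  # P(B|¬A) = 1 - P(¬B|¬A)
print(round(false_positive_rate, 2))   # 0.03
```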

Bayes in Decisions: “What If the Prior Is Uncertain?”

Sometimes you don’t know the true base rate. In that case, it’s smart to explore a range of priors. Try a low, medium, and high P(A) to see how sensitive your conclusion is. If the posterior changes dramatically across plausible priors, your decision may need more data before you act confidently.

Why This Calculator Uses a Two-Case Model

This tool models a common scenario where A has two states: true or false. Many practical problems fit that structure, like “disease vs no disease,” “fraud vs no fraud,” or “spam vs not spam.” More complex situations with multiple categories can still be handled with Bayes, but the formula expands to include more cases in the evidence term.

How to Sanity-Check Your Result

  • If P(B|A) equals P(B|¬A), the evidence doesn’t differentiate A from ¬A, so the posterior should equal the prior.
  • If P(B|A) is high and P(B|¬A) is near zero, a B observation should push the posterior strongly upward.
  • If P(A) is extremely small, a single piece of evidence rarely makes the posterior extremely large unless false positives are tiny.
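These checks can be written as assertions against a small helper (posterior here is my own sketch of the two-case formula; the test values are arbitrary):

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Two-case Bayes posterior P(A|B)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Uninformative evidence: posterior equals the prior.
assert abs(posterior(0.3, 0.6, 0.6) - 0.3) < 1e-9

# Near-zero false positives: a B observation pushes the posterior up hard.
assert posterior(0.3, 0.9, 0.001) > 0.99

# Tiny prior: good evidence still leaves the posterior modest
# unless false positives are tiny.
assert posterior(0.001, 0.9, 0.05) < 0.02
```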

Limitations and Safe Use Notes

Bayes’ theorem is only as reliable as the assumptions and inputs you provide. If probabilities are estimated poorly, or if evidence is not independent in the way you assume, the posterior can mislead. Use the calculator as a planning and reasoning tool, and always check that definitions match your real-world situation.

FAQ

Bayes’ Theorem Calculator – Frequently Asked Questions

Common questions about priors, likelihoods, false positives, and how to interpret posterior probability.

What does Bayes’ theorem calculate?
Bayes’ theorem calculates an updated probability (the posterior) after you observe new evidence. It answers questions like “Given that B happened, how likely is A?” written as P(A|B).

How do I calculate P(A|B)?
You combine three inputs: the prior P(A), the likelihood P(B|A), and the false-positive rate P(B|¬A). First compute P(B) = P(B|A)P(A) + P(B|¬A)P(¬A). Then compute P(A|B) = P(B|A)P(A) ÷ P(B).

What is the difference between P(A|B) and P(B|A)?
They look similar but mean different things. P(A|B) is the probability of A given B. P(B|A) is the probability of B given A. Bayes’ theorem connects them using the base rate P(A) and the overall probability of B.

What do the inputs mean for a medical test?
Typically: prevalence P(A) (how common the condition is), sensitivity P(B|A) (test positive if the condition is present), and false-positive rate P(B|¬A) (test positive when the condition is absent). The calculator then returns the probability the condition is present given a positive test.

Why does the base rate matter so much?
Because it anchors the update. Even a strong test can produce many false positives when the condition is rare. Bayes’ theorem forces the prior P(A) to be included so the posterior reflects real-world prevalence.

What is P(B)?
P(B) is the overall probability of observing evidence B. It’s the weighted sum of getting B when A is true and getting B when A is false. It normalizes the result so the posterior stays between 0 and 1.

Can I enter values as percentages?
Yes. Switch the input format to Percent and enter values like 2, 85, or 0.5 depending on your preference. The calculator converts them correctly and shows results in both decimal and percent forms.

What if I don’t know the false-positive rate?
Bayes’ theorem needs some way to estimate how often evidence appears when A is false. If you don’t have P(B|¬A), you can’t compute a reliable posterior from this simple two-case model. Consider getting specificity/false-positive data or using a more detailed model.

What does the posterior mean?
It’s your updated belief after seeing evidence. Prior is what you believed before, likelihood describes how evidence behaves, and posterior is what you should believe after combining them.

Is Bayes’ theorem only for medical testing?
No. It’s used in spam filtering, A/B testing intuition, risk assessment, diagnostics, forecasting, machine learning, and everyday decision-making whenever you update beliefs based on new information.

Can I rely on the results for important decisions?
Results are for education and planning. Always verify that your inputs represent the correct real-world meanings (prior, likelihood, and false-positive rate) before using outputs for medical, financial, safety, or compliance decisions.