Geometric Distribution Calculator

Calculate PMF (P(X=k)), CDF (P(X≤k)), tail probabilities, probability between two values, and inverse CDF (quantiles). Supports both conventions: trials until first success (k≥1) and failures before first success (k≥0).

Geometric Distribution Tool

Enter success probability p, choose the geometric definition, then pick a mode (PMF, CDF/tails, between, quantile, or table).

PMF formulas: Trials (k≥1): P(X=k)=(1−p)^(k−1)·p. Failures (k≥0): P(X=k)=(1−p)^k·p.
CDF shortcuts: Trials: P(X≤k)=1−(1−p)^k. Failures: P(X≤k)=1−(1−p)^(k+1). Tails are complements using the correct “≤” boundary for your definition.
Between probability is computed with CDF differences: P(a ≤ X ≤ b) = CDF(b) − CDF(a−1) (after converting your chosen interval into integer bounds).
Inverse CDF returns an integer cutoff. Internally we use log-based formulas (and safe bounds) to avoid slow iteration.
The table lists k, PMF, CDF, and right tail from the minimum valid k up to max k. If you enter n, you’ll also see expected counts: n×P(X=k).
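The table described above can be sketched in a few lines. This is an illustrative pure-Python version, assuming the trials convention (k starts at 1); the function name and row layout are hypothetical, not the tool's internals.

```python
# Sketch of the probability table: k, PMF, CDF, right tail, and
# (optionally) expected counts n*P(X=k). Trials convention (k >= 1).

def geometric_table(p, k_max, n=None):
    """Rows of (k, pmf, cdf, right_tail[, expected_count]) for k = 1..k_max."""
    q = 1.0 - p
    rows = []
    cdf = 0.0
    for k in range(1, k_max + 1):
        pmf = q ** (k - 1) * p        # string of k-1 failures, then success
        cdf += pmf                     # P(X <= k)
        right_tail = 1.0 - (cdf - pmf) # P(X >= k) = 1 - P(X <= k-1)
        row = (k, pmf, cdf, right_tail)
        if n is not None:
            row += (n * pmf,)          # expected count out of n repetitions
        rows.append(row)
    return rows

rows = geometric_table(0.25, 5, n=100)
```

With p = 0.25 and n = 100, the first row is k = 1 with PMF 0.25 and an expected count of 25.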

What the Geometric Distribution Calculator Does

The Geometric Distribution Calculator gives you fast, exact probabilities for “first success” questions. If you repeat the same yes/no experiment over and over—each time with the same chance of success p—the geometric distribution models how long you wait until the first success arrives.

This tool supports both standard textbook conventions: (1) trials until first success (values start at k=1) and (2) failures before first success (values start at k=0). They describe the same process; they’re simply shifted by 1. The calculator keeps the definitions clear so you don’t get off-by-one errors.

Two Common Definitions

You will see the geometric distribution defined in two ways. The difference is what you count: do you count trials or failures?

  • Trials: X = number of trials until first success; support k = 1, 2, 3, …; PMF (1−p)^(k−1)·p; CDF 1−(1−p)^k; mean 1/p; variance (1−p)/p².
  • Failures: X = number of failures before first success; support k = 0, 1, 2, …; PMF (1−p)^k·p; CDF 1−(1−p)^(k+1); mean (1−p)/p; variance (1−p)/p².

Notice that the variance is the same in both forms, while the mean differs by 1 because the random variable is shifted. If you define X as trials, then “1 trial” is the earliest possible success. If you define X as failures, then “0 failures” is the earliest possible success. This tool lets you toggle the definition so the formulas and valid k range always match what you mean.
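A quick way to convince yourself of the mean and variance formulas is to sum k·P(X=k) over a long range and compare with the closed forms. A minimal numeric check for the trials convention:

```python
# Numeric check of E[X] = 1/p and Var(X) = (1-p)/p^2 for the
# trials convention, by truncated summation (the tail is negligible).

def moments_trials(p, k_max=10_000):
    q = 1.0 - p
    mean = sum(k * q ** (k - 1) * p for k in range(1, k_max + 1))
    second = sum(k * k * q ** (k - 1) * p for k in range(1, k_max + 1))
    return mean, second - mean ** 2

p = 0.3
mean, var = moments_trials(p)
# Closed forms to compare against: 1/p and (1-p)/p**2
```

The failures convention gives the same variance and a mean exactly 1 lower, since the variable is just shifted by one.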

When the Geometric Model Is a Good Fit

Geometric is built on a simple repeated-trials story: each trial is independent, each trial has the same success probability p, and the experiment stops as soon as the first success happens. This makes it ideal for situations like:

  • Quality control: how many items do you inspect until you find one defective (or one acceptable)?
  • Customer conversion: how many visits until a user makes the first purchase?
  • Sales outreach: how many calls until the first positive response?
  • Reliability testing: how many attempts until a system successfully completes a task?
  • Simple games/chance: how many tries until you roll a 6, draw a specific card, etc.

If p is stable and trials are independent, geometric is usually the cleanest model for “waiting time until first success.” If p changes from trial to trial (learning effects, fatigue, seasonality, changing conditions), geometric can still be useful as an approximation, but the exact probabilities may be off—especially in the tails.

PMF: Probability of Exactly k

The PMF gives the probability of landing on an exact value. For geometric, it has a very intuitive structure: “fail several times in a row, then succeed.”

Trials (k≥1): P(X=k) = (1−p)^(k−1) · p
Failures (k≥0): P(X=k) = (1−p)^k · p

In both cases, (1−p) is the probability of failure, often written as q, so each PMF is a power of q times p: the “string of failures” times the final success. This calculator also reports CDF and right-tail probability at the same k so you can immediately answer “at most” and “at least” questions without switching modes.
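The two PMF formulas translate directly into code. A minimal sketch (function names are illustrative, not the calculator's internals), returning 0 for values outside each convention's support:

```python
# PMF helpers for both geometric conventions.

def pmf_trials(k, p):
    """P(X = k) when X counts trials until the first success (k >= 1)."""
    if k < 1:
        return 0.0
    return (1.0 - p) ** (k - 1) * p   # k-1 failures, then one success

def pmf_failures(k, p):
    """P(X = k) when X counts failures before the first success (k >= 0)."""
    if k < 0:
        return 0.0
    return (1.0 - p) ** k * p         # k failures, then one success
```

Note the shift by one: pmf_trials(k, p) equals pmf_failures(k − 1, p), which is exactly the relationship between the two conventions.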

CDF and Tail Probabilities

The CDF answers threshold questions like “What’s the chance we succeed by the 5th try?” or “What’s the chance we have at most 3 failures before a success?”

The best part is that geometric has a closed-form CDF—no long sums required:

Trials: P(X ≤ k) = 1 − (1−p)^k
Failures: P(X ≤ k) = 1 − (1−p)^(k+1)

Tail probabilities (“at least” and “more than”) are computed as complements. The key is using the correct boundary: P(X ≥ k) = 1 − P(X ≤ k−1) (when k is valid). The calculator handles these boundaries based on your chosen definition so the result matches the words on the screen: ≤, <, ≥, or >.
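The closed-form CDF and the boundary rules above can be sketched as follows, assuming the trials convention (the failures convention only changes the exponent and the minimum k):

```python
# Closed-form CDF plus "at least" / "more than" tails, trials convention.

def cdf_trials(k, p):
    """P(X <= k); zero below the support minimum k = 1."""
    if k < 1:
        return 0.0
    return 1.0 - (1.0 - p) ** k

def tail_at_least(k, p):
    """P(X >= k) = 1 - P(X <= k-1)."""
    return 1.0 - cdf_trials(k - 1, p)

def tail_more_than(k, p):
    """P(X > k) = 1 - P(X <= k)."""
    return 1.0 - cdf_trials(k, p)
```

For p = 0.5, the chance of succeeding within 3 trials is 1 − 0.5³ = 0.875, and “at least 1 trial” is always probability 1, since the earliest possible success is trial 1.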

Probability Between Two Values

Range questions come up often in planning and monitoring: “Probability the first success occurs between trial 3 and trial 10,” or “Probability the number of failures is between 0 and 4.”

The calculator computes these with a reliable method:

P(a ≤ X ≤ b) = CDF(b) − CDF(a−1)

For exclusive or half-open intervals, it converts your range into integer bounds and applies the same CDF difference idea. This avoids a common mistake: summing many PMF terms manually and accidentally skipping or including an endpoint.
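The CDF-difference method can be written in a few lines. This sketch uses the trials convention and cross-checks against summing the PMF directly:

```python
# P(a <= X <= b) via CDF differences, trials convention.

def cdf_trials(k, p):
    return 0.0 if k < 1 else 1.0 - (1.0 - p) ** k

def prob_between(a, b, p):
    """P(a <= X <= b) = CDF(b) - CDF(a-1); zero for an empty range."""
    if b < a:
        return 0.0
    return cdf_trials(b, p) - cdf_trials(a - 1, p)

# Cross-check against summing PMF terms for k = 3..10:
p = 0.2
direct = sum((1 - p) ** (k - 1) * p for k in range(3, 11))
```

Both routes agree; the CDF difference simply avoids the endpoint bookkeeping of the manual sum.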

Inverse CDF: Finding a Cutoff (Quantile)

Sometimes you know the probability you want and you need the k threshold. This is a quantile problem: “Find the smallest k such that P(X ≤ k) ≥ 0.95.” In practice, this is used for service levels and planning:

  • Capacity: “How many attempts should we allow so 95% of users succeed?”
  • Monitoring: “What’s the 99th percentile of failures before success?”
  • Decision rules: “If success hasn’t happened by k, treat it as unusual.”

The calculator uses log-based rearrangements of the CDF to compute k quickly and then verifies the result by checking the CDF and right-tail values at k. Because geometric is discrete, the quantile is always an integer, and “smallest k” is the correct interpretation for typical percentile cutoffs.
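A log-based quantile can be sketched like this for the trials convention. Solving 1 − (1−p)^k ≥ target for k gives k ≥ log(1−target)/log(1−p); the loops below guard against floating-point edge cases exactly at the boundary (the function name is illustrative):

```python
import math

def quantile_trials(target, p):
    """Smallest integer k >= 1 with P(X <= k) >= target, 0 < target < 1."""
    # 1 - (1-p)^k >= target  <=>  k >= log(1-target) / log(1-p)
    k = max(1, math.ceil(math.log1p(-target) / math.log1p(-p)))
    # Verify against the exact CDF and nudge if rounding put us off by one.
    while 1.0 - (1.0 - p) ** k < target:
        k += 1
    while k > 1 and 1.0 - (1.0 - p) ** (k - 1) >= target:
        k -= 1
    return k
```

For example, with p = 0.2 the 95% cutoff is k = 14: the CDF at 13 is about 0.945 (too low) and at 14 about 0.956.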

The Memoryless Property

The geometric distribution is famous for being memoryless. In plain language: if you’ve already failed many times, the probability distribution of how many more trials you need looks the same as it did at the beginning.

For the trials definition, one statement of memorylessness is:

P(X > s + t | X > s) = P(X > t)

This property is only true when each trial is independent and the success probability p stays constant. If the process “learns” (p improves) or “wears out” (p worsens), geometric will not be memoryless, and that can be a clue that you need a different model.
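The memoryless identity is easy to verify numerically, since the survival function for the trials convention is simply P(X > k) = (1−p)^k:

```python
# Numeric check of P(X > s + t | X > s) == P(X > t), trials convention.

def survival(k, p):
    """P(X > k) = (1-p)^k for integer k >= 0."""
    return (1.0 - p) ** k

p, s, t = 0.3, 5, 4
conditional = survival(s + t, p) / survival(s, p)   # P(X > s+t | X > s)
unconditional = survival(t, p)                      # P(X > t)
```

The two quantities match for any s and t, because (1−p)^(s+t)/(1−p)^s = (1−p)^t: past failures cancel out exactly.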

Geometric vs Negative Binomial

The geometric distribution can be seen as a special case of the negative binomial distribution. Negative binomial models the number of trials (or failures) until you achieve r successes. If you set r=1, you get geometric. If your situation requires “time until the 3rd success,” geometric is too narrow and negative binomial is usually the correct extension.
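The r = 1 reduction can be checked directly. The negative binomial PMF for k failures before the r-th success is C(k+r−1, k)·p^r·(1−p)^k; at r = 1 the binomial coefficient is 1 and it collapses to the geometric failures PMF:

```python
from math import comb

def nbinom_pmf(k, r, p):
    """P(k failures before the r-th success), negative binomial."""
    return comb(k + r - 1, k) * p ** r * (1.0 - p) ** k

def geom_failures_pmf(k, p):
    """Geometric PMF, failures-before-first-success convention."""
    return (1.0 - p) ** k * p
```

For every k, nbinom_pmf(k, 1, p) equals geom_failures_pmf(k, p), confirming that geometric is the r = 1 special case.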

Common Mistakes This Calculator Helps Prevent

  • Using the wrong support: the trials definition starts at k=1, the failures definition starts at k=0. If you put k=0 into the trials definition, the probability is 0.
  • Off-by-one in the CDF: trials CDF uses exponent k, failures CDF uses exponent (k+1).
  • Confusing “at least k” with “more than k”: ≥ and > differ by one unit for discrete distributions.
  • Entering p as a percent but treating it as decimal: 20% is 0.20, not 20.

FAQ

Geometric Distribution Calculator FAQs

Quick answers about p, the two geometric definitions, tails, and when to use this model.

The geometric distribution models the number of Bernoulli trials needed to get the first success (or, in another common definition, the number of failures before the first success) when each trial has the same success probability p.

p is the probability of success on each independent trial. It must be between 0 and 1 (exclusive for most practical cases).

Some textbooks define X as the number of trials until the first success (k≥1). Others define X as the number of failures before the first success (k≥0). Both are geometric; they’re just shifted by 1.

For trials-until-success: P(X=k)=(1−p)^(k−1)·p (k≥1). For failures-before-success: P(X=k)=(1−p)^k·p (k≥0).

Use the CDF. Trials definition: P(X≤k)=1−(1−p)^k. Failures definition: P(X≤k)=1−(1−p)^(k+1).

Geometric is memoryless: the probability you need at least m more trials does not depend on how many failures happened already (when trials are independent with constant p).

Trials definition: E[X]=1/p and Var(X)=(1−p)/p^2. Failures definition: E[X]=(1−p)/p and Var(X)=(1−p)/p^2.

If p changes from trial to trial, trials aren’t independent, or success/failure isn’t binary, geometric may be a poor fit.

This calculator assumes independent trials with a constant success probability p. If p varies across trials or trials are dependent, results may not match real-world data.