
z-Test Calculator

Run z-tests for means (one-sample and two-sample with known σ) and z-tests for proportions (one-proportion and two-proportion). Get the z statistic, p-value (one- or two-tailed), critical z, confidence interval, effect size, and a clear decision at your chosen α.

Mean z-tests (σ known) · Proportion z-tests · p-value + critical z · CI + effect size
Note: Mean z-tests assume population standard deviation (σ) is known and observations are independent. Proportion z-tests rely on a normal approximation, which is best when sample sizes are sufficiently large.

z-Test Tool

Choose a test tab, enter your inputs (summary stats or counts), and calculate. Switch tail type, α, and confidence level anytime.

This test compares a sample mean to μ₀ using a known population SD (σ). If σ is unknown, a t-test is usually more appropriate.
This two-sample z-test compares two independent means when σ₁ and σ₂ are known. If you only have sample SDs (s₁, s₂), use a two-sample t-test instead (Welch is usually safest).
One-proportion z-test checks whether an observed sample proportion p̂ = x/n differs from p₀. The test standard error uses p₀ (because the null assumes p = p₀). The confidence interval typically uses p̂.
Two-proportion z-test compares p₁ and p₂. Under H₀: p₁ = p₂ (and Δ₀ = 0), the hypothesis test uses a pooled proportion for the standard error. For confidence intervals, unpooled SE is typically preferred.
| Confidence level | Two-tailed α | One-tailed α | Typical z* |
|---|---|---|---|
| 90% | 0.10 | 0.05 | ≈ 1.6449 |
| 95% | 0.05 | 0.025 | ≈ 1.9600 |
| 99% | 0.01 | 0.005 | ≈ 2.5758 |
Critical z values come from the standard normal distribution. Use them for confidence intervals and rejection thresholds.
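The z* values in the table above can be reproduced with Python's standard library (`statistics.NormalDist`); this is a sketch, not the calculator's own code:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    z_star = std_normal.inv_cdf(1 - alpha / 2)  # two-sided critical value
    print(f"{conf:.0%} confidence: z* ≈ {z_star:.4f}")
```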

What the z-Test Is

A z-test is a hypothesis test that uses the standard normal distribution (the z distribution) to decide whether a statistic (like a mean or a proportion) is consistent with a null hypothesis. The heart of every z-test is the same idea: take what you observed, subtract what the null claims should happen, then scale that difference by a standard error so the result is measured in “standard deviation units.”

This z-Test Calculator covers the four most common cases: (1) one-sample z-test for a mean (with known population standard deviation σ), (2) two-sample z-test for independent means (with known σ₁ and σ₂), (3) one-proportion z-test, and (4) two-proportion z-test. Each mode reports the same decision-friendly outputs: z statistic, p-value, critical z, confidence interval, and effect size.

When a z-Test Is Appropriate

In practice, z-tests are most appropriate in two situations. First, when testing a mean and the population SD (σ) is truly known from a stable process, historical calibration, or a trusted specification. Second, when testing proportions, where the normal approximation to the binomial distribution is reasonable due to sufficiently large sample sizes.

If you do not know σ for mean tests, the t distribution is usually the right tool and a t-test is the typical default. For small samples, heavy outliers, or severely skewed data, you may need a different method altogether—especially if assumptions like independence or approximate normality are questionable.

z-Test Modes Supported by This Calculator

| Mode | Tests | Null hypothesis | Common use |
|---|---|---|---|
| One-sample mean z | Mean vs μ₀ | H₀: μ = μ₀ | Compare an average to a target when σ is known |
| Two-sample mean z | Mean difference | H₀: μ₁ − μ₂ = Δ₀ | Compare two independent means when σ₁ and σ₂ are known |
| One-proportion z | Proportion vs p₀ | H₀: p = p₀ | Check if a rate differs from a benchmark |
| Two-proportion z | Difference in proportions | H₀: p₁ − p₂ = Δ₀ | Compare conversion rates, defect rates, or success rates |

The Core z-Test Structure

Every z-test reduces to:

z = (estimate − null value) / SE

Here, estimate might be a sample mean (x̄), a difference of means (x̄₁−x̄₂), a sample proportion (p̂), or a difference in proportions (p̂₁−p̂₂). The standard error (SE) measures the typical variation of that estimate under repeated sampling. Under the null hypothesis (and assumptions), z approximately follows a standard normal distribution.
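This shared structure can be written as a tiny helper; the example numbers below are hypothetical:

```python
def z_statistic(estimate: float, null_value: float, se: float) -> float:
    """Standardize: z = (estimate - null value) / SE."""
    return (estimate - null_value) / se

# Hypothetical example: x̄ = 103, μ₀ = 100, SE = σ/√n = 15/√100 = 1.5
z = z_statistic(103, 100, 1.5)  # → 2.0
```

Every mode below just supplies its own estimate, null value, and SE to this same formula.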

Mean z-Tests With Known Population Standard Deviation

One-sample mean z-test

The one-sample mean z-test compares a sample mean x̄ to a hypothesized mean μ₀ when σ is known:

z = (x̄ − μ₀) / (σ / √n)

If your process SD is truly known, this is a clean test. If σ is estimated from the same sample, the t-test is generally more accurate.
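A minimal sketch of this test with a two-tailed p-value, using hypothetical numbers (x̄ = 102, μ₀ = 100, σ = 10, n = 64):

```python
from math import sqrt
from statistics import NormalDist

def one_sample_mean_z(xbar, mu0, sigma, n, two_tailed=True):
    """One-sample mean z-test with known sigma: z = (x̄ - μ₀) / (σ/√n)."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    tail = 1 - NormalDist().cdf(abs(z))
    p = 2 * tail if two_tailed else tail
    return z, p

z, p = one_sample_mean_z(102, 100, 10, 64)  # z = 1.6, p ≈ 0.1096
```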

Two-sample mean z-test

The two-sample mean z-test compares two independent group means when σ₁ and σ₂ are known:

z = ((x̄₁ − x̄₂) − Δ₀) / √(σ₁²/n₁ + σ₂²/n₂)

This is common in quality control or controlled measurement systems. In many real datasets, σ₁ and σ₂ are not known, so the two-sample t-test (Welch) is used instead.
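The two-sample formula can be sketched the same way; the group summaries here are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_mean_z(x1, x2, sigma1, sigma2, n1, n2, delta0=0.0):
    """Two-sample mean z-test with known sigma1 and sigma2 (two-tailed p)."""
    se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    z = ((x1 - x2) - delta0) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical: x̄₁ = 5.2, x̄₂ = 5.0, σ₁ = 0.4, σ₂ = 0.5, n₁ = n₂ = 50
z, p = two_sample_mean_z(5.2, 5.0, 0.4, 0.5, 50, 50)
```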

Proportion z-Tests

One-proportion z-test

The one-proportion z-test compares an observed sample proportion p̂ = x/n to a hypothesized proportion p₀. The hypothesis test uses p₀ in the standard error because the null assumes p = p₀:

z = (p̂ − p₀) / √(p₀(1−p₀)/n)

For the confidence interval, many workflows use p̂ in the SE (because you’re estimating p), which is what this calculator reports by default.
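The asymmetry between the test (SE from p₀) and the Wald interval (SE from p̂) can be made explicit in code; the counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def one_prop_z(x, n, p0, conf=0.95):
    """One-proportion z-test (SE from p0) plus a Wald CI (SE from p-hat)."""
    phat = x / n
    z = (phat - p0) / sqrt(p0 * (1 - p0) / n)     # null SE uses p0
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    z_star = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z_star * sqrt(phat * (1 - phat) / n)   # CI SE uses p-hat
    return z, p_value, (phat - half, phat + half)

# Hypothetical: 130 successes out of 200, benchmark p0 = 0.60
z, p_value, ci = one_prop_z(130, 200, 0.60)
```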

Two-proportion z-test

The two-proportion z-test compares p̂₁ = x₁/n₁ and p̂₂ = x₂/n₂. Under H₀: p₁ = p₂ (especially when Δ₀ = 0), the standard error for the hypothesis test uses a pooled estimate:

p_pool = (x₁ + x₂) / (n₁ + n₂)

z = ((p̂₁ − p̂₂) − Δ₀) / √(p_pool(1−p_pool)(1/n₁ + 1/n₂))

For confidence intervals, an unpooled SE is typically preferred because you are estimating p₁ and p₂ separately:

SE_CI = √(p̂₁(1−p̂₁)/n₁ + p̂₂(1−p̂₂)/n₂)
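Putting the pooled test and the unpooled interval together (a sketch; the conversion counts are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(x1, n1, x2, n2, conf=0.95):
    """Two-proportion z-test (pooled SE) with an unpooled-SE confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se_test = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled, for the test
    z = (p1 - p2) / se_test                                    # Δ₀ = 0
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    z_star = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se_ci = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)      # unpooled, for the CI
    diff = p1 - p2
    return z, p_value, (diff - z_star * se_ci, diff + z_star * se_ci)

# Hypothetical conversion counts: 45/500 vs 30/500
z, p_value, ci = two_prop_z(45, 500, 30, 500)
```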

How to Interpret Tail Type, p-Value, and α

A two-tailed test asks whether the true parameter differs in either direction (greater or less). A one-tailed test asks whether the parameter differs in a single specified direction. Tail choice should be made before analyzing results.

The p-value is computed from the standard normal distribution: it is the probability of a z-statistic at least as extreme as the observed value, assuming the null is true. Your significance level α is the cutoff you choose (commonly 0.05). If p ≤ α you reject H₀; if p > α you fail to reject H₀.
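The three tail conventions map directly onto the standard normal CDF; a minimal sketch:

```python
from statistics import NormalDist

def p_value(z: float, tail: str = "two") -> float:
    """p-value for a z statistic under the standard normal, by tail type."""
    cdf = NormalDist().cdf
    if tail == "two":
        return 2 * (1 - cdf(abs(z)))
    if tail == "right":   # H₁: parameter > null value
        return 1 - cdf(z)
    if tail == "left":    # H₁: parameter < null value
        return cdf(z)
    raise ValueError("tail must be 'two', 'right', or 'left'")

# p_value(1.96, "two") ≈ 0.05
```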

Confidence Intervals and Critical z

A confidence interval provides a range of plausible values for the parameter based on your sample. The standard form is:

CI = estimate ± z* · SE

The critical value z* depends on your confidence level: for a two-sided interval, z* is the (1 − α/2) quantile of the standard normal distribution. This calculator shows z* inside each test mode and also provides a dedicated Critical z tab for quick reference.
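The interval formula is the same in every mode; only the estimate and SE change. A sketch with hypothetical numbers (mean 102, SE = σ/√n = 10/√64 = 1.25):

```python
from statistics import NormalDist

def z_interval(estimate: float, se: float, conf: float = 0.95):
    """Two-sided CI: estimate ± z* · SE, with z* from the confidence level."""
    z_star = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return estimate - z_star * se, estimate + z_star * se

lo, hi = z_interval(102, 1.25, 0.95)  # ≈ (99.55, 104.45)
```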

Effect Size for z-Tests

Statistical significance answers “is there evidence of a difference?” but not “how big is the difference?” That’s why effect sizes are useful. This tool reports a simple, interpretable effect size per mode:

  • Mean z-tests: a standardized difference similar to Cohen’s d using σ (for one-sample, d = (x̄−μ₀)/σ).
  • Two-sample means: a standardized difference using an average σ scale (a practical summary for magnitude).
  • Proportions: Cohen’s h, an arcsine-based effect size that behaves well near 0 and 1.

Even when p-values are small, effect size helps you decide whether a difference is meaningful in real terms (e.g., operational impact, business value, or clinical relevance).
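The one-sample Cohen's d and Cohen's h described above are both one-liners; the example values are hypothetical:

```python
from math import asin, sqrt

def cohen_d_one_sample(xbar, mu0, sigma):
    """Standardized mean difference: d = (x̄ - μ₀) / σ."""
    return (xbar - mu0) / sigma

def cohen_h(p1, p2):
    """Cohen's h: difference of arcsine-transformed proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

d = cohen_d_one_sample(102, 100, 10)  # → 0.2, a "small" effect by common benchmarks
h = cohen_h(0.65, 0.60)               # ≈ 0.10
```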

Step-by-Step: How to Use the z-Test Calculator

Quick workflow

  1. Set tail type (two-tailed or one-tailed) and choose α and confidence level.
  2. Select the correct tab: Mean (one-sample or two-sample) or Proportion (one or two).
  3. Enter summary statistics (means/σ/n) or counts (x and n), then click Calculate.
  4. Review z, p-value, critical z, CI, and decision.
  5. Use the effect size output to understand how large the difference is, not just whether it is statistically detectable.

FAQ

z-Test Calculator FAQs

Practical questions about z-tests for means and proportions, assumptions, and interpretation.

A z-test is a hypothesis test based on the standard normal (z) distribution. It is commonly used when population standard deviation is known (for mean tests) or when testing proportions with sufficiently large sample sizes.

Use a z-test for means when the population SD (σ) is known (or n is very large and σ is well-estimated). Use a t-test when σ is unknown and you rely on the sample SD.

Two-tailed tests detect differences in either direction. One-tailed tests detect differences in a specified direction only (greater-than or less-than).

The p-value is the probability (assuming the null hypothesis is true) of observing a z statistic at least as extreme as the one computed from your data.

For two-tailed tests, if the null value is outside the confidence interval (at the same confidence level), the test typically rejects the null at the matching α.

Proportion z-tests assume independent trials and sample sizes large enough for the normal approximation to hold; a common rule of thumb is that np and n(1−p) should both be at least 10 (some texts use 5).

Under the null hypothesis p₁ = p₂, pooling combines successes to estimate a common proportion used in the standard error for the hypothesis test.

For mean z-tests, a common standardized effect is Cohen’s d using σ. For proportions, Cohen’s h is often used to express the difference between proportions on an arcsine scale.