Overview

The Mann-Whitney U test, also known as the Wilcoxon rank-sum test, is a non-parametric test used to determine whether there is a significant difference between the distributions of two independent samples. Unlike the t-test, it does not assume that the data are normally distributed.


What Does ‘Non-Parametric’ Mean?

Non-parametric tests, like the Mann-Whitney U test, do not assume a specific distribution for the data. This contrasts with parametric tests, such as the t-test, which assume the data follows a normal distribution. Non-parametric tests are more flexible and can be used when the assumptions of parametric tests are not met, particularly when dealing with skewed distributions or ordinal data.


Why is the t-test Parametric?

The t-test is considered a parametric test because it relies on assumptions about the parameters of the population distribution from which the sample is drawn. Specifically, the t-test assumes that:

  1. The data are drawn from populations that follow a normal distribution.
  2. The variances of the two populations are equal (in the case of the standard, pooled-variance two-sample t-test).
  3. The data are measured on an interval or ratio scale.

These assumptions allow the t-test to make inferences about the population mean and to use the t-distribution to calculate the probability of observing the data given the null hypothesis.


Why is the Mann-Whitney U Test Non-Parametric?

The Mann-Whitney U test is considered non-parametric because it does not assume any particular form for the underlying population distributions. Instead, it evaluates whether one sample tends to have larger values than the other by ranking all the data points and comparing the sums of these ranks between the two groups. This rank-based method makes the test robust to non-normal distributions and outliers, allowing it to be used in a wider range of situations.


Let’s walk through the Mann-Whitney U test step by step, using the following data for two groups, Group A and Group B:

\[ \begin{array}{|c|c|} \hline \text{Group A} & \text{Group B} \\ \hline 1.1 & 2.5 \\ 2.3 & 3.1 \\ 2.5 & 3.6 \\ 3.8 & 4.0 \\ 4.1 & 4.2 \\ \hline \end{array} \]


Step 1: Combine and Rank the Data

Combine the data from both groups and rank them from smallest to largest. When values are tied, assign each of them the average of the ranks they would otherwise occupy. Here the two values of 2.5 are tied for ranks 3 and 4, so each receives a rank of 3.5.

\[ \begin{array}{|c|c|c|} \hline \text{Value} & \text{Group} & \text{Rank} \\ \hline 1.1 & A & 1 \\ 2.3 & A & 2 \\ 2.5 & A & 3.5 \\ 2.5 & B & 3.5 \\ 3.1 & B & 5 \\ 3.6 & B & 6 \\ 3.8 & A & 7 \\ 4.0 & B & 8 \\ 4.1 & A & 9 \\ 4.2 & B & 10 \\ \hline \end{array} \]
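
This ranking step can be reproduced in Python with scipy.stats.rankdata, which assigns average ranks to tied values by default. A minimal, self-contained sketch (the variable names are just illustrative):

```python
import numpy as np
from scipy.stats import rankdata

group_a = np.array([1.1, 2.3, 2.5, 3.8, 4.1])
group_b = np.array([2.5, 3.1, 3.6, 4.0, 4.2])

# Rank the pooled sample; tied values receive their average rank (3.5 here).
ranks = rankdata(np.concatenate([group_a, group_b]))

ranks_a = ranks[:len(group_a)]  # ranks of Group A: 1, 2, 3.5, 7, 9
ranks_b = ranks[len(group_a):]  # ranks of Group B: 3.5, 5, 6, 8, 10

print(ranks_a, ranks_b)
```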


Step 2: Sum the Ranks for Each Group

Calculate the sum of the ranks for each group:

\[ R_A = 1 + 2 + 3.5 + 7 + 9 = 22.5 \]

\[ R_B = 3.5 + 5 + 6 + 8 + 10 = 32.5 \]
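
The rank sums are easy to check: with 10 observations in total they must add up to \(10 \times 11 / 2 = 55\). A small sketch using the ranks from the table above:

```python
# Ranks taken from the ranking table in Step 1.
ranks_a = [1, 2, 3.5, 7, 9]
ranks_b = [3.5, 5, 6, 8, 10]

r_a = sum(ranks_a)  # 22.5
r_b = sum(ranks_b)  # 32.5

# Sanity check: all ranks 1..10 must be accounted for.
assert r_a + r_b == 10 * 11 / 2
```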


Step 3: Calculate the U Statistics

The U statistic for each group is calculated using the following formulas:

\[ U_A = n_A n_B + \frac{n_A (n_A + 1)}{2} - R_A \]

\[ U_B = n_A n_B + \frac{n_B (n_B + 1)}{2} - R_B \]

where \(n_A\) and \(n_B\) are the sample sizes of Group A and Group B, respectively.

\[ U_A = 5 \times 5 + \frac{5 \times 6}{2} - 22.5 = 25 + 15 - 22.5 = 17.5 \]

\[ U_B = 5 \times 5 + \frac{5 \times 6}{2} - 32.5 = 25 + 15 - 32.5 = 7.5 \]

As a quick check, \(U_A + U_B\) should always equal \(n_A n_B = 25\), which holds here.


Step 4: Determine the Smaller U Value

The smaller U value is used for the test statistic:

\[ U = \min(U_A, U_B) = \min(17.5, 7.5) = 7.5 \]
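
A short sketch computing both U statistics from the formulas in Step 3 and taking the smaller one as in Step 4, with the rank sums carried over from Step 2:

```python
n_a, n_b = 5, 5          # sample sizes
r_a, r_b = 22.5, 32.5    # rank sums from Step 2

u_a = n_a * n_b + n_a * (n_a + 1) / 2 - r_a  # 17.5
u_b = n_a * n_b + n_b * (n_b + 1) / 2 - r_b  # 7.5

# The two U statistics always sum to n_a * n_b.
assert u_a + u_b == n_a * n_b

u = min(u_a, u_b)  # 7.5, the test statistic
```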


Step 5: Calculate the Mean and Standard Deviation of U

For large samples, the distribution of U under the null hypothesis can be approximated by a normal distribution with the following mean (\(\mu_U\)) and standard deviation (\(\sigma_U\)). A correction to \(\sigma_U\) exists for tied ranks, but with only a single tie here it is negligible:

\[ \mu_U = \frac{n_A n_B}{2} \]

\[ \sigma_U = \sqrt{\frac{n_A n_B (n_A + n_B + 1)}{12}} \]

\[ \mu_U = \frac{5 \times 5}{2} = 12.5 \]

\[ \sigma_U = \sqrt{\frac{5 \times 5 \times 11}{12}} = \sqrt{22.92} = 4.79 \]
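
The same quantities in code, assuming the sample sizes above (this uses the plain formula without a tie correction, matching the hand calculation):

```python
import math

n_a, n_b = 5, 5

mu_u = n_a * n_b / 2                                   # 12.5
sigma_u = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)  # about 4.787
```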


Step 6: Calculate the Z-Score

Convert the U value to a Z-score:

\[ Z = \frac{U - \mu_U}{\sigma_U} = \frac{7.5 - 12.5}{4.79} = -1.04 \]


Step 7: Determine the P-Value

Use the Z-score to find the corresponding P-value from the standard normal distribution. For a two-tailed test, double the one-tailed P-value.

For \(Z = -1.04\), the one-tailed P-value is approximately 0.1492, so the two-tailed P-value is \(0.1492 \times 2 = 0.2984\).
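
A sketch covering Steps 6 and 7 under the same normal approximation; scipy.stats.norm provides the standard normal CDF:

```python
from scipy.stats import norm

u, mu_u, sigma_u = 7.5, 12.5, 4.787  # values from Steps 4 and 5

z = (u - mu_u) / sigma_u         # about -1.04
p_one_tailed = norm.cdf(z)       # roughly 0.15
p_two_tailed = 2 * p_one_tailed  # roughly 0.30, matching the hand calculation up to rounding
```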


Step 8: Conclusion

Compare the P-value to the significance level (e.g., \(\alpha = 0.05\)). If the P-value is less than \(\alpha\), reject the null hypothesis.

In this example, \(P \approx 0.2984\) is greater than 0.05, so we fail to reject the null hypothesis: there is no statistically significant difference between the distributions of Group A and Group B at the 5% level.
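
As a cross-check, the whole procedure is available as scipy.stats.mannwhitneyu. Its result will be close to, but not identical to, the hand calculation above, because with ties present SciPy uses a tie-corrected normal approximation and, by default, a continuity correction; the U it reports may also follow a different convention than \(\min(U_A, U_B)\), depending on the SciPy version.

```python
import numpy as np
from scipy.stats import mannwhitneyu

group_a = np.array([1.1, 2.3, 2.5, 3.8, 4.1])
group_b = np.array([2.5, 3.1, 3.6, 4.0, 4.2])

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")

# The p-value is well above 0.05, so the conclusion is the same:
# fail to reject the null hypothesis.
print(stat, p_value)
```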

